The 4th "Huawei Cup" Wireless Communication Algorithm Competition
AI-Enabled Robust SVD Operator for Wireless Communication
End-to-end network that predicts (U, S, V) for large MIMO channels without any QR/SVD/EVD/inversion inside the network.
Final score aligns with the official metric: Score = 100 × AE + MACs.
Competition Result: Final Score 63.0 — 4th place (Third Prize) at Huawei Tech Arena 2025.
- Axial Low-rank Frequency Gate (ALF): 2D-FFT front-end gating keeps informative angle/delay bands; hidden size is structurally prunable.
- Grouped Projected Attention (GPA): K/V column projection with length `k_len` → complexity O(T·k_len); interpretable column dictionary; prunable by energy.
- Gated Depthwise-Conv: local smoothing with very low MACs.
- Neural Ortho Refiner (NOR): one-step, decomposition-free orthogonality refinement on the Stiefel manifold; learns step sizes `(a, b)`.
- Spectral Self-Calibration (SSC): annealed blend of predicted `S` and measured `|diag(UᴴHV)|`.
- AEPlus Loss: `L_rec + λ·L_ortho + w_E·L_energy + w_D·L_diag + w_S·L_smatch` with smooth schedules.
- Step-3 Structured Pruning: rebuild a smaller isomorphic network (no masks) → real MACs drop under PyTorch Profiler; a short finetune recovers AE.
- Train/Prune/Infer Unified: one `model.py` drives the full pipeline.
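The NOR idea above can be sketched as a single Newton–Schulz-style correction toward the Stiefel manifold, with no QR/SVD involved. This is only an illustration: the fixed step `a=0.5` stands in for the learned step sizes `(a, b)`, and the exact update in `model.py` may differ.

```python
import torch

def nor_refine(U, a=0.5):
    """One decomposition-free orthogonality step (illustrative sketch).

    Moves U toward U^H U = I without QR/SVD:
        U <- U - a * U @ (U^H U - I)
    With a=0.5 this is one Newton-Schulz orthogonalization step.
    """
    G = U.conj().transpose(-2, -1) @ U                      # Gram matrix
    I = torch.eye(G.shape[-1], dtype=G.dtype, device=G.device)
    return U - a * (U @ (G - I))

torch.manual_seed(0)
U = torch.randn(64, 8, dtype=torch.complex64)
U = U / U.norm(dim=0, keepdim=True)                         # near-unit columns

def ortho_err(M):
    k = M.shape[-1]
    return (M.conj().T @ M - torch.eye(k, dtype=M.dtype)).norm()

err0 = ortho_err(U)
err1 = ortho_err(nor_refine(U))
print(err1 < err0)  # one step shrinks ||U^H U - I||_F
```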
```
.
├── model.py          # train / prune_ft / infer
├── scripts/          # convenience shell scripts
├── docs/             # analysis plots used in README
├── ckpts/            # checkpoints (ignored by git)
├── submissions/      # npz outputs (ignored by git)
├── requirements.txt
└── .gitignore
```
```
conda create -n svdnet python=3.10 -y
conda activate svdnet
pip install -r requirements.txt
```

Place the official data (e.g. `CompetitionData2`) under `./data2/`, following the organizer's naming:
```
data2/
  Round2CfgData1.txt
  Round2CfgData2.txt
  Round2CfgData3.txt
  Round2CfgData4.txt
  Round2TrainData1.npy ...
  Round2TrainLabel1.npy ...
  Round2TestData1.npy ...
```
```
python model.py \
  --mode train \
  --data_dir ./data2 \
  --out_dir ./submissions
```

During training, the code measures real Mega MACs using `torch.profiler` and prints `val_AE`, `MACs` and `Score`.
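The MACs measurement can be reproduced standalone with `torch.profiler`'s FLOPs counter. A minimal sketch, where the toy model and input shape are placeholders rather than the competition network:

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Placeholder model; the real network lives in model.py.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 128),
).eval()
x = torch.randn(1, 128)

with profile(activities=[ProfilerActivity.CPU], with_flops=True) as prof:
    with torch.no_grad():
        model(x)

# PyTorch reports FLOPs; MACs are roughly FLOPs / 2 for multiply-accumulates.
flops = sum(e.flops for e in prof.key_averages())
mega_macs = flops / 2 / 1e6
print(f"Mega MACs/sample: {mega_macs:.4f}")
```

Counting profiled FLOPs on a real forward pass, instead of summing layer formulas by hand, is what keeps the reported `C` honest after pruning.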
```
python model.py \
  --mode prune_ft \
  --data_dir ./data2 \
  --src_ckpt ./ckpts/preprune.pth \
  --keep_klen 28 --keep_hidden 28 \
  --ft_epochs 60 --ft_lr 2e-4
```

```
python model.py \
  --mode infer \
  --data_dir ./data2 \
  --ckpt ./ckpts/best.pth \
  --out_dir ./submissions/round2
```

This produces 1.npz…4.npz under `--out_dir`, each containing keys U, S, V, C.
C is the Mega MACs measured by PyTorch Profiler, used in the final score.
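A quick sanity check of an output file can be done with NumPy. The shapes below are illustrative only (the real ones depend on each round's config), and the file here is a dummy built in place of an actual submission:

```python
import numpy as np

# Build a dummy file with the same keys the infer step writes
# (shapes and the C value are illustrative, not the competition's).
np.savez("demo.npz",
         U=np.zeros((10, 64, 8), dtype=np.complex64),
         S=np.zeros((10, 8), dtype=np.float32),
         V=np.zeros((10, 64, 8), dtype=np.complex64),
         C=np.float32(12.5))

data = np.load("demo.npz")
assert set(data.files) == {"U", "S", "V", "C"}

U, S, V, C = data["U"], data["S"], data["V"], data["C"]
print("U:", U.shape, "S:", S.shape, "V:", V.shape, "C:", float(C))
```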
- Compliance: no QR/SVD/EVD/inversion is used inside the network; orthogonality is enforced by the loss + NOR.
- Normalization: per-sample Frobenius normalization during train/test; `S` is rescaled back at inference.
- Augmentation: power-aware complex noise and random antenna (row/column) dropout.
- Complexity: `get_avg_flops()` calls PyTorch Profiler to compute real MACs per sample, avoiding proxy FLOPs.
- Defaults: the values in `make_cfg()` reflect our final setting: `DIM=64`, `DEPTH=2`, `GROUPS=4`, `K_LEN=32 → 28` (after pruning), `GATE_HIDDEN=32 → 28`.
- Schedules: `λ` ramps up; `τ`: 0.90 → 0.60; `w_E`, `w_D`, `w_S` smoothly decay.
- Finetune: `epochs=60`, `lr=2e-4`.
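The normalization note above can be illustrated as follows. This is a sketch under the assumption that each channel sample `H` is scaled to unit Frobenius norm and the predicted singular values are multiplied back by the stored scale; the actual code in `model.py` may differ in detail.

```python
import torch

def frob_normalize(H):
    """Scale each sample to unit Frobenius norm; keep the scale for later."""
    # H: (batch, rows, cols), complex
    scale = H.flatten(1).norm(dim=1).clamp_min(1e-12)  # per-sample ||H||_F
    return H / scale.view(-1, 1, 1), scale

torch.manual_seed(0)
H = torch.randn(4, 64, 64, dtype=torch.complex64)
Hn, scale = frob_normalize(H)

# After predicting singular values on the normalized input,
# rescale them back: S_original = S_normalized * scale.
S_pred = torch.rand(4, 8)                 # placeholder network output
S_back = S_pred * scale.view(-1, 1)

print(Hn.flatten(1).norm(dim=1))          # each sample now has unit norm
```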
This repository is for competition/research purposes. Commercial use is subject to the organizer's policy.
Thanks to the organizers and the open-source community.


