# Wing Selector MLP
This repository contains a PyTorch MLP that scores candidate aircraft-style wings sharing the same airfoil, ranking them for a chosen objective:
- `min_cd` (minimize drag)
- `max_cl` (maximize lift)
- `max_ld` (maximize lift-to-drag)

It was trained on the `ecopus/transport-wings-500` dataset.
## Files
- `best.pt`: best checkpoint by validation top-1@group
- `last.pt`: final checkpoint after training
- `config.json`: input dim, #airfoils, feature scaler stats
- `feature_names.json`: expected feature order
- `airfoil_vocab.json`: airfoil name → id mapping used during training
- `inference.py`: minimal loader & scoring helper
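A minimal sketch of pulling these artifacts with `huggingface_hub`. The repo id and the scaler key names inside `config.json` are assumptions here; `inference.py` in the repo is the authoritative loader.

```python
import json

import torch
from huggingface_hub import hf_hub_download

REPO_ID = "ecopus/wing-selector-mlp"  # assumption: substitute the actual repo id

config = json.load(open(hf_hub_download(REPO_ID, "config.json")))
feature_names = json.load(open(hf_hub_download(REPO_ID, "feature_names.json")))
airfoil_vocab = json.load(open(hf_hub_download(REPO_ID, "airfoil_vocab.json")))
checkpoint = torch.load(hf_hub_download(REPO_ID, "best.pt"), map_location="cpu")

# Assumed key names; check config.json for the actual scaler schema.
mean = torch.tensor(config["feature_mean"])
std = torch.tensor(config["feature_std"])
```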
## Model Architecture
The model is a feedforward neural network trained as a binary classifier: given an airfoil and an aerodynamic objective, it scores each candidate wing geometry on whether it is the "best" in its group.
- Inputs: The model takes three inputs:
- Wing Features: A vector of 22 continuous features describing the wing's geometry and aerodynamic properties. These features are standardized (mean-centered and scaled by standard deviation) before being fed into the model.
- Objective ID: A one-hot encoded vector representing one of the three possible design objectives.
- Airfoil ID: An embedding vector that learns a representation for each unique airfoil in the training data.
- Embedding Layer: An `nn.Embedding` layer converts the discrete airfoil ID into a dense 8-dimensional vector.
- Hidden Layers: The core of the network consists of two fully connected hidden layers, each with 128 neurons and a ReLU activation.
- Output Layer: A final linear layer outputs a single logit, which represents the model's prediction score.
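Putting the pieces together, a minimal PyTorch sketch of the architecture as described; the class name, argument names, and the airfoil-count default are illustrative (the real sizes come from `config.json`):

```python
import torch
import torch.nn as nn

class WingSelectorMLP(nn.Module):
    """Illustrative sketch of the architecture described above."""

    def __init__(self, n_features=22, n_objectives=3, n_airfoils=100,
                 emb_dim=8, hidden=128):
        super().__init__()
        # Dense 8-dim representation for each airfoil id.
        self.airfoil_emb = nn.Embedding(n_airfoils, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(n_features + n_objectives + emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit per candidate
        )

    def forward(self, features, objective_onehot, airfoil_id):
        emb = self.airfoil_emb(airfoil_id)                        # (B, 8)
        x = torch.cat([features, objective_onehot, emb], dim=-1)  # (B, 33)
        return self.net(x).squeeze(-1)                            # (B,) logits
```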
## Training Hyperparameters
| Hyperparameter | Value |
|---|---|
| Epochs | 50 |
| Batch Size | 64 |
| Learning Rate | 2e-3 |
| Optimizer | AdamW |
| LR Scheduler | CosineAnnealingLR |
| Loss Function | BCEWithLogitsLoss |
| Seed | 42 |
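These settings map directly onto PyTorch; a sketch of the training setup, assuming the scheduler's `T_max` is tied to the 50-epoch budget and with `train_loader` as a hypothetical DataLoader over the dataset:

```python
import torch

torch.manual_seed(42)

model = WingSelectorMLP()  # from the sketch above
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
criterion = torch.nn.BCEWithLogitsLoss()

for epoch in range(50):
    for features, onehot, ids, labels in train_loader:  # hypothetical loader, batch size 64
        optimizer.zero_grad()
        loss = criterion(model(features, onehot, ids), labels.float())
        loss.backward()
        optimizer.step()
    scheduler.step()
```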
## Final Training Metrics
| Metric | Value |
|---|---|
| Validation AUC | 0.9790 |
| Validation Avg. Precision | 0.638 |
| Validation Top-1 Accuracy | 76.0% |
- AUC: measures how well the model separates "better" from "worse" candidates across all possible score thresholds. Equivalently: if you randomly draw one "positive" candidate and one "negative", the model scores the positive higher 97.9% of the time.
- Average Precision: the area under the precision-recall curve, which emphasizes how pure the top of the ranking is as you move down the list of candidates. Sweeping through candidates from best to worst, the precision maintained among "positive" predictions averages to 0.64.
- Top-1 Accuracy: the fraction of groups where the model's #1 choice matches the true best, computed over cases with a single ground-truth best.
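Top-1 accuracy mirrors how the model is meant to be used: score every candidate wing for one airfoil/objective pair and take the argmax. A sketch, assuming standardized features and that the one-hot order matches the objective list above (an assumption; `inference.py` is authoritative):

```python
import torch

OBJECTIVES = ["min_cd", "max_cl", "max_ld"]  # assumed to match training order

@torch.no_grad()
def pick_best_wing(model, features, objective, airfoil_id, mean, std):
    """Score all candidate wings in one group and return the index of the best."""
    x = (torch.as_tensor(features, dtype=torch.float32) - mean) / std
    onehot = torch.zeros(len(x), len(OBJECTIVES))
    onehot[:, OBJECTIVES.index(objective)] = 1.0
    ids = torch.full((len(x),), airfoil_id, dtype=torch.long)
    logits = model(x, onehot, ids)
    return logits.argmax().item()
```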