Wing Selector MLP

This repository contains a PyTorch MLP that scores aircraft-style wing candidates sharing the same airfoil, ranking them for a chosen objective:

  • min_cd (minimize drag),
  • max_cl (maximize lift),
  • max_ld (maximize lift-to-drag).

It was trained on the dataset ecopus/transport-wings-500.

Files

  • best.pt – best checkpoint by validation top-1@group  
  • last.pt – final checkpoint after training  
  • config.json – input dim, #airfoils, feature scaler stats  
  • feature_names.json – expected feature order  
  • airfoil_vocab.json – airfoil name β†’ id mapping used during training  
  • inference.py – minimal loader & scoring helper
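
For reference, a minimal loading sketch in the spirit of inference.py. The JSON key names used for the scaler statistics ("feature_mean", "feature_std") are assumptions; check config.json and inference.py for the exact fields.

```python
import json

import numpy as np
import torch

# Metadata files shipped with the checkpoint.
with open("config.json") as f:
    config = json.load(f)
with open("feature_names.json") as f:
    feature_names = json.load(f)      # expected order of the 22 wing features
with open("airfoil_vocab.json") as f:
    airfoil_vocab = json.load(f)      # airfoil name -> integer id

def standardize(raw: dict) -> torch.Tensor:
    """Arrange raw wing features in the expected order and apply the
    training-time scaler stats stored in config.json (key names assumed)."""
    x = np.array([raw[name] for name in feature_names], dtype=np.float32)
    mean = np.array(config["feature_mean"], dtype=np.float32)
    std = np.array(config["feature_std"], dtype=np.float32)
    return torch.from_numpy((x - mean) / std)
```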

Model Architecture

The model is a feedforward neural network designed for a binary classification task. It predicts the "best" wing geometry for a given airfoil and aerodynamic objective.

  • Inputs: The model takes three inputs:
    1. Wing Features: A vector of 22 continuous features describing the wing's geometry and aerodynamic properties. These features are standardized (mean-centered and scaled by standard deviation) before being fed into the model.
    2. Objective ID: A one-hot encoded vector representing one of the three possible design objectives.
    3. Airfoil ID: An embedding vector that learns a representation for each unique airfoil in the training data.
  • Embedding Layer: An nn.Embedding layer converts the discrete airfoil ID into a dense 8-dimensional vector.
  • Hidden Layers: The core of the network consists of two fully connected hidden layers, each with 128 neurons and using the ReLU activation function.
  • Output Layer: A final linear layer outputs a single logit, which represents the model's prediction score.
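
Put together, a module matching this description looks roughly like the sketch below. Layer names, the concatenation order, and the constructor signature are assumptions; the authoritative definition ships with inference.py.

```python
import torch
import torch.nn as nn

class WingSelectorMLP(nn.Module):
    """Sketch of the described architecture: 22 standardized wing features,
    a 3-way one-hot objective, and an 8-d airfoil embedding feed two
    128-unit ReLU layers ending in a single logit."""

    def __init__(self, n_airfoils: int, n_features: int = 22,
                 n_objectives: int = 3, emb_dim: int = 8, hidden: int = 128):
        super().__init__()
        self.airfoil_emb = nn.Embedding(n_airfoils, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(n_features + n_objectives + emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features, objective_onehot, airfoil_id):
        # features: (B, 22) standardized; objective_onehot: (B, 3); airfoil_id: (B,)
        x = torch.cat([features, objective_onehot,
                       self.airfoil_emb(airfoil_id)], dim=-1)
        return self.net(x).squeeze(-1)  # one logit per candidate wing
```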

Training Hyperparameters

Hyperparameter     Value
Epochs             50
Batch Size         64
Learning Rate      2e-3
Optimizer          AdamW
LR Scheduler       CosineAnnealingLR
Loss Function      BCEWithLogitsLoss
Seed               42
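
These settings map onto standard PyTorch objects. The sketch below shows the setup under assumptions not stated in the table: a synthetic stand-in dataset, default weight decay, and T_max equal to the number of epochs; it reuses the WingSelectorMLP sketch above.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(42)

# Tiny synthetic stand-in for the real dataset, just to make the sketch runnable:
# (features, one-hot objective, airfoil id, 0/1 "is best in its group" target).
N, N_AIRFOILS = 256, 40
ds = TensorDataset(torch.randn(N, 22),
                   torch.eye(3)[torch.randint(0, 3, (N,))],
                   torch.randint(0, N_AIRFOILS, (N,)),
                   torch.randint(0, 2, (N,)).float())
train_loader = DataLoader(ds, batch_size=64, shuffle=True)

model = WingSelectorMLP(n_airfoils=N_AIRFOILS)              # sketch from the section above
criterion = nn.BCEWithLogitsLoss()                          # binary "is this the best wing?" loss
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

for epoch in range(50):
    for features, objective_onehot, airfoil_id, target in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(features, objective_onehot, airfoil_id), target)
        loss.backward()
        optimizer.step()
    scheduler.step()                                        # cosine decay stepped once per epoch
```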

Final Training Metrics

Metric                       Value
Validation AUC               0.9790
Validation Avg. Precision    0.638
Validation Top-1 Accuracy    76.0%

  • AUC: measures how well the model separates "better vs. worse" items across all possible score thresholds. E.g., if you randomly take one "positive" candidate and one "negative", the model puts a higher score on the positive 97.9% of the time.
  • Average Precision: the area under the precision-recall curve. It emphasizes how pure the top of the ranking is as you move down the list of candidates. This value means that as you sweep through candidates from best to worst, the precision maintained over the "positive" predictions averages to 0.64.
  • Top-1 Accuracy: the fraction of candidate groups in which the model's #1 choice equals the true best, computed over groups that have a single ground-truth best.
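
Top-1 accuracy is evaluated per candidate group (same airfoil and objective). A sketch of that computation, assuming a 0/1 label marking the ground-truth best wing in each group:

```python
import numpy as np

def top1_at_group(scores: np.ndarray, labels: np.ndarray, group_ids: np.ndarray) -> float:
    """Fraction of groups where the highest-scoring candidate is the true best.
    `labels` is 1 for the ground-truth best wing in its group, else 0."""
    hits, total = 0, 0
    for g in np.unique(group_ids):
        mask = group_ids == g
        if labels[mask].sum() != 1:      # skip groups without a single ground-truth best
            continue
        total += 1
        hits += int(labels[mask][np.argmax(scores[mask])] == 1)
    return hits / max(total, 1)
```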