| Requirement | Metric | Criteria | Constraint |
|---|---|---|---|
| The compute resource should be cheap or free to use. | Cost (CAD) | Lower is better | No more than ??? |
| The compute resource should have enough RAM/VRAM to train the model. | Gigabytes (GB) of RAM/VRAM | More is better | At least ??? |
| The compute resource should have enough cores to train the model. | Number of cores | More is better | At least ??? |
| The compute resource should have enough storage to contain the dataset. | Gigabytes of Storage (GB) | More is better | At least ??? |
| The compute resource should be easy to use. | | | |
| The compute resource should be easy for everyone to access. | | | |
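To help fill in the "???" constraints and check whether a candidate machine actually meets them, a short script like the sketch below could be run on each resource. This is only a rough sketch: the threshold values are placeholders (the real constraints are still undecided), and it assumes `psutil` is installed, with PyTorch available for the optional VRAM check.

```python
import os
import shutil

import psutil  # assumed installed: pip install psutil

MIN_RAM_GB = 16       # placeholder; actual constraint is still "???"
MIN_CORES = 4         # placeholder
MIN_STORAGE_GB = 50   # placeholder

# Report total RAM, CPU core count, and free disk space on this machine.
ram_gb = psutil.virtual_memory().total / 1e9
cores = os.cpu_count() or 0
free_storage_gb = shutil.disk_usage("/").free / 1e9

print(f"RAM:          {ram_gb:6.1f} GB (constraint: >= {MIN_RAM_GB} GB)")
print(f"CPU cores:    {cores:6d}    (constraint: >= {MIN_CORES})")
print(f"Free storage: {free_storage_gb:6.1f} GB (constraint: >= {MIN_STORAGE_GB} GB)")

# GPU/VRAM check, only if PyTorch with CUDA support is available on the machine.
try:
    import torch
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"GPU: {props.name}, {props.total_memory / 1e9:.1f} GB VRAM")
    else:
        print("No CUDA-capable GPU detected")
except ImportError:
    print("PyTorch not installed; skipping VRAM check")
```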
Candidate compute resources:

- UTAT Remote Desktop: GPU is old; probably not powerful enough.
- ECF:
- CompSci compute (?):
- GitHub Codespaces: see "Getting started with GitHub Codespaces for machine learning" (GitHub Docs).
- Google Colab:
- Kaggle Code:
- Azure ML:
- Amazon SageMaker:
| Compute Resource | Cost | Chip(s) | RAM/VRAM | Cores | Storage | Ease of Use |
|---|---|---|---|---|---|---|
| ECF | Free | ??? | | | | Access through VM; would need an ECF login |
| CompSci | Free(?) | ??? | | | | |
| GitHub Codespaces | $2.88/hr | | 128 GB | 32 cores | | Already integrated with GitHub, so easy to use |
| Google Colab | $13.99/month; additional compute at $13.99 per 100 Compute Units | A100 or V100 GPU | 80 GB (A100) or 32 GB (V100) | 6,912 CUDA Cores + 432 Tensor Cores (A100); 5,120 CUDA Cores + 640 Tensor Cores (V100) | | Would need to move code to Colab; unsure whether multiple people can use it |
| Kaggle Code | Free | P100 GPU | 16 GB | 3,584 CUDA Cores | | Would need to move code to Kaggle, but everyone can use it |
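To compare the hourly and subscription pricing in the table, a quick back-of-the-envelope calculation helps. The sketch below uses the rates from the table above; the number of training hours per month is a placeholder assumption, not a measured value.

```python
# Rough monthly cost comparison between hourly and flat-rate pricing.
TRAINING_HOURS_PER_MONTH = 20    # placeholder assumption

CODESPACES_HOURLY_CAD = 2.88     # 32-core Codespaces rate from the table
COLAB_MONTHLY_CAD = 13.99        # Colab subscription from the table

codespaces_cost = CODESPACES_HOURLY_CAD * TRAINING_HOURS_PER_MONTH
colab_cost = COLAB_MONTHLY_CAD   # flat rate, ignoring extra Compute Units

print(f"GitHub Codespaces: ${codespaces_cost:.2f} CAD/month "
      f"for {TRAINING_HOURS_PER_MONTH} h of training")
print(f"Google Colab:      ${colab_cost:.2f} CAD/month (flat)")
print(f"Kaggle Code:       $0.00 CAD/month")
```

At these rates, Codespaces' hourly billing already exceeds the Colab subscription at roughly 5 hours of training per month (13.99 / 2.88 ≈ 4.9 h).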