---
title: "Building Geospatial Foundation Models"
subtitle: "Department of Geography"
description: "Fall 2025"
title-block-banner: false
toc: false
---

<!-- banner image: "Geospatial AI visualization" (height=5in, centered) -->

::: {.gray-text .center-text}
*Advancing environmental monitoring through AI*
:::

## Course Description

This project-driven seminar teaches students to build geospatial foundation models (GFMs) from scratch. Students implement every layer themselves: data pipelines and tokenization, attention mechanisms, full architectures, pretraining, evaluation, and deployment, culminating in a working end-to-end GFM tailored to a chosen geospatial application.
By the end of the course, students will be able to:

- Design and implement geospatial data pipelines for multi-spectral, spatial, and temporal data
- Build attention mechanisms and assemble transformer-based architectures for geospatial inputs
- Pretrain models with masked autoencoding and evaluate the learned representations (a minimal sketch of the idea follows this list)
- Fine-tune models for specific Earth observation tasks
- Deploy models via APIs and interactive interfaces, with honest performance analysis
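To give a concrete sense of what "from scratch" means here, below is a deliberately simplified sketch of the masked-autoencoding idea referenced above: multi-spectral chips are cut into patch tokens, most tokens are replaced by a learned mask token, and a small transformer learns to reconstruct the hidden pixels. This is an illustrative sketch, not course code; names such as `PatchMAE`, `bands`, and `mask_ratio` are made up for the example, and the in-class implementation will differ (positional embeddings, a proper encoder/decoder split, real data loaders).

```python
# Illustrative sketch only (hypothetical names, simplified masking); not course code.
import torch
import torch.nn as nn


class PatchMAE(nn.Module):
    """Toy masked autoencoder over multi-spectral image patches."""

    def __init__(self, bands=6, patch=16, dim=128, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.embed = nn.Linear(bands * patch * patch, dim)   # patch "tokenizer"
        self.mask_token = nn.Parameter(torch.zeros(dim))     # stands in for hidden patches
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(dim, bands * patch * patch)    # pixel reconstruction head

    def forward(self, x):                                    # x: (B, bands, H, W)
        B, C, _, _ = x.shape
        p = self.patch
        # Cut the image into non-overlapping patches and flatten each to a vector.
        patches = x.unfold(2, p, p).unfold(3, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        tokens = self.embed(patches)
        # Randomly hide most patches, then ask the model to reconstruct their pixels.
        mask = torch.rand(B, tokens.shape[1], device=x.device) < self.mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        recon = self.head(self.encoder(tokens))
        return ((recon - patches) ** 2)[mask].mean()         # loss on masked patches only


# Smoke test on random 6-band, 64x64 chips (band count and chip size are assumptions).
loss = PatchMAE()(torch.randn(2, 6, 64, 64))
loss.backward()
```

Running this on random tensors is only a shape and gradient check; the real training loop comes in Weeks 4-5, and Week 6 covers how to tell whether the learned representations are any good.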
## Getting Started with the UCSB AI Sandbox
[Here](../installation/GRIT_SETUP.md) are detailed instructions for setting up the class environment on the UCSB AI Sandbox, including foundation model installation and GPU optimization. The class environment is already set up for you, but the guide may be useful if you want to deploy the course infrastructure on a different server or a local machine.
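If you do set things up yourself, a quick sanity check along these lines (not part of the official setup guide, and assuming a PyTorch-based environment) confirms that the GPU is visible before you run any course code:

```python
# Hypothetical post-install sanity check; assumes PyTorch is the course framework.
import torch

print(f"PyTorch {torch.__version__}")
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1e9:.1f} GB")
    print(f"bfloat16 supported: {torch.cuda.is_bf16_supported()}")
else:
    print("No GPU detected; training will fall back to CPU and be very slow.")
```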
## Course Structure: 3 Stages, 10 Weeks

```{mermaid}
flowchart TD
    subgraph Stage1 ["Stage 1: Build GFM Architecture"]
        direction LR
        W1["Week 1<br/>Data Foundations<br/>Pipelines & Tokenization"] --> W2["Week 2<br/>Attention Mechanisms<br/>Spatial-Temporal Focus"]
        W2 --> W3["Week 3<br/>Complete Architecture<br/>Vision Transformer"]
    end

    subgraph Stage2 ["Stage 2: Train Foundation Model"]
        direction LR
        W4["Week 4<br/>Pretraining<br/>Masked Autoencoder"] --> W5["Week 5<br/>Training Optimization<br/>Stability & Efficiency"]
        W5 --> W6["Week 6<br/>Evaluation & Analysis<br/>Embeddings & Probing"]
        W6 --> W7["Week 7<br/>Model Integration<br/>Prithvi, SatMAE"]
    end

    subgraph Stage3 ["Stage 3: Apply & Deploy"]
        direction LR
        W8["Week 8<br/>Fine-tuning<br/>Task-Specific Training"] --> W9["Week 9<br/>Deployment<br/>APIs & Interfaces"]
        W9 --> W10["Week 10<br/>Presentations<br/>Project Synthesis"]
    end

    Stage1 --> Stage2
    Stage2 --> Stage3

    style Stage1 fill:#e3f2fd
    style Stage2 fill:#fff3e0
    style Stage3 fill:#e8f5e8
    style W1 fill:#bbdefb
    style W4 fill:#ffe0b2
    style W8 fill:#c8e6c8
```

### Stage 1: Build GFM Architecture (Weeks 1-3)

- Week 1: Geospatial Data Foundations (data pipelines, tokenization, loaders)
- Week 2: Spatial-Temporal Attention Mechanisms (from-scratch implementation)
- Week 3: Complete GFM Architecture (Vision Transformer for geospatial)

### Stage 2: Train a Foundation Model (Weeks 4-7)

- Week 4: Pretraining Implementation (masked autoencoder)
- Week 5: Training Loop Optimization (stability, efficiency, mixed precision)
- Week 6: Model Evaluation & Analysis (embeddings, probing, reconstructions)
- Week 7: Integration with Existing Models (Prithvi, SatMAE)

### Stage 3: Apply & Deploy (Weeks 8-10)

- Week 8: Task-Specific Fine-tuning (efficient strategies, few-shot)
- Week 9: Model Implementation & Deployment (APIs, UI, benchmarking; a minimal serving sketch appears below)
- Week 10: Project Presentations & Synthesis
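As a preview of Stage 3, the sketch below shows one minimal way a fine-tuned GFM could be exposed as an API in Week 9. It assumes a FastAPI + PyTorch stack and a `.npy` chip upload format, neither of which is a course requirement, and it uses an `Identity` module as a stand-in for a real checkpoint.

```python
# Minimal serving sketch (assumed FastAPI + PyTorch stack; stand-in model).
import io

import numpy as np
import torch
from fastapi import FastAPI, File, UploadFile

app = FastAPI(title="GFM demo API")

model = torch.nn.Identity()   # replace with a fine-tuned GFM checkpoint
model.eval()


@app.post("/embed")
async def embed(chip: UploadFile = File(...)):
    """Return a (truncated) embedding for an uploaded image chip saved as .npy."""
    raw = await chip.read()
    array = np.load(io.BytesIO(raw))                        # expected shape: (bands, H, W)
    tensor = torch.from_numpy(array).float().unsqueeze(0)   # add a batch dimension
    with torch.no_grad():
        features = model(tensor)
    # Flatten and truncate only to keep the demo's JSON response small.
    return {"shape": list(features.shape), "embedding": features.flatten().tolist()[:16]}
```

With a real checkpoint in place of the stand-in, something like `uvicorn app:app` (assuming the file is saved as `app.py`) serves the endpoint locally, which the Week 9 interactive interfaces and benchmarking can then call.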
## Course Sessions

- Weekly sessions: see **Weekly Sessions** in the navbar

## Teaching Team

<br>

::: {.grid}

::: {.g-col-12 .g-col-md-4}

::: {.center-text .body-text-l}
**Instructor**
:::

<!-- headshot image (width=45%, centered) -->

::: {.center-text}
[**Kelly Caylor**]{.teal-text}

**Email:** [caylor@ucsb.edu](mailto:caylor@ucsb.edu)

**Learn more:** [Bren profile](https://bren.ucsb.edu/people/kelly-caylor)
:::

:::

::: {.g-col-12 .g-col-md-4}

::: {.center-text .body-text-l}
**TA**
:::

<!-- headshot image (width=45%, centered) -->

::: {.center-text}
[**Anna Boser**]{.teal-text}

**Email:** [annaboser@ucsb.edu](mailto:annaboser@ucsb.edu)

**Learn more:** [Bren profile](https://bren.ucsb.edu/people/anna-boser)
:::

:::

:::