The NASA Cosmic Origins Program AI/ML Science and Technology Interest Group (AI/ML STIG) addresses the critical need to upskill the astronomy community with AI literacy. We provide structured, domain-specific AI education through stackable, bite-sized modular training designed for astronomical research contexts.
Established under the Cosmic Origins Program Analysis Group (COPAG), the STIG brings together researchers and educators to build a comprehensive AI education framework tailored for the astronomy community.
All lecture materials are open-source and designed to serve as templates for speakers and educators in the astronomy community.
The Future of AI and the Mathematical and Physical Sciences
Jesse Thaler, MIT
An overview of how AI is transforming the mathematical and physical sciences, exploring opportunities and strategic priorities for the astronomy and astrophysics community. Based on the NSF Future of AI+MPS Workshop white paper.
Large Language Models as Research Agents: Part 1
Yuan-Sen Ting, The Ohio State University
Learn the fundamentals of working with LLM APIs—making calls, managing conversations, and crafting effective prompts. Master key parameters, build multi-turn conversations, and implement prompting strategies for research tasks.
Adapted from Coding Essentials for Astronomers.
Ting, Y.-S. (2025). Coding Essentials for Astronomers. Zenodo. DOI: 10.5281/zenodo.17850426
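To make the API mechanics concrete, here is a minimal sketch of a single call and a multi-turn exchange using the `openai` Python client. The model name, API key, and prompts are placeholders for illustration, not the lecture's own materials; any OpenAI-compatible chat-completions endpoint would work the same way.

```python
# Minimal sketch of single-call and multi-turn LLM usage with the
# OpenAI Python client. Model name and API key are placeholders.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder key

# Conversation state is just a list of role-tagged messages.
messages = [
    {"role": "system", "content": "You are an assistant for astronomical research."},
    {"role": "user", "content": "What is the Sersic profile, in one sentence?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",   # example model name
    messages=messages,
    temperature=0.2,       # low temperature for factual answers
    max_tokens=150,
)
answer = response.choices[0].message.content
print(answer)

# Multi-turn: append the assistant's reply, then ask a follow-up.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "How does the Sersic index change its shape?"})
followup = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(followup.choices[0].message.content)
```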
Large Language Models as Research Agents: Part 2
Yuan-Sen Ting, The Ohio State University
Break through LLM limitations with function tools and Retrieval Augmented Generation. Build astronomical calculation tools, implement document-based Q&A, and create powerful research assistants.
Adapted from Coding Essentials for Astronomers.
Ting, Y.-S. (2025). Coding Essentials for Astronomers. Zenodo. DOI: 10.5281/zenodo.17850426
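For a flavor of what a function tool looks like in practice, the sketch below wraps a real `astropy` cosmology calculation and advertises it to a model with a JSON schema in the OpenAI function-calling style. The surrounding wiring is an illustrative assumption rather than the lecture's own code; RAG follows the same loop, with retrieval of document chunks standing in for the calculation.

```python
# Illustrative function tool for an LLM. The astropy cosmology call is
# real; the schema follows the OpenAI function-calling convention.
from astropy.cosmology import Planck18

def luminosity_distance_mpc(z: float) -> float:
    """Luminosity distance in Mpc at redshift z (Planck 2018 cosmology)."""
    return float(Planck18.luminosity_distance(z).value)

# Schema the model sees. When the model emits a tool call with arguments
# like {"z": 1.5}, the host code runs the function and returns the result
# in a follow-up message for the model to use in its answer.
tools = [{
    "type": "function",
    "function": {
        "name": "luminosity_distance_mpc",
        "description": "Luminosity distance in Mpc for a given redshift.",
        "parameters": {
            "type": "object",
            "properties": {"z": {"type": "number", "description": "Redshift"}},
            "required": ["z"],
        },
    },
}]

print(luminosity_distance_mpc(1.5))  # roughly 11,000 Mpc
```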
Large Language Models as Research Agents: Part 3
Francisco Villaescusa-Navarro, Flatiron Institute CCA
Explore autonomous AI agents that can reason, act, and collaborate. Learn about multi-agent systems, LangGraph workflows, and how AI agents are revolutionizing scientific research from literature search to experiment design.
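As a taste of the workflow style covered here, below is a minimal LangGraph sketch: a two-node plan-then-act graph over a typed state. The node bodies are toy stand-ins for LLM calls, an assumption made to keep the example self-contained; the graph-building calls (`StateGraph`, `add_node`, `add_edge`, `compile`) are the real LangGraph interface.

```python
# Minimal LangGraph workflow: a "plan" node feeding an "act" node.
# Node logic is toy Python standing in for LLM and tool calls.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    question: str
    plan: str
    answer: str

def plan_node(state: AgentState) -> dict:
    # In a real agent, an LLM call would draft this plan.
    return {"plan": f"Search the literature for: {state['question']}"}

def act_node(state: AgentState) -> dict:
    # In a real agent, this would execute tools (search, code, etc.).
    return {"answer": f"Executed plan -> {state['plan']}"}

graph = StateGraph(AgentState)
graph.add_node("plan", plan_node)
graph.add_node("act", act_node)
graph.set_entry_point("plan")
graph.add_edge("plan", "act")
graph.add_edge("act", END)

app = graph.compile()
print(app.invoke({"question": "recent JWST results on high-z galaxies"}))
```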
Large Language Models as Research Agents: Part 4
Yuan-Sen Ting, The Ohio State University
Introduction to the Model Context Protocol (MCP) for connecting LLMs to external data and tools. Learn how to build MCP servers to expose local resources and APIs to AI assistants.
Adapted from Coding Essentials for Astronomers.
Ting, Y.-S. (2025). Coding Essentials for Astronomers. Zenodo. DOI: 10.5281/zenodo.17850426
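To illustrate how compact an MCP server can be, here is a sketch using the `FastMCP` helper from the official `mcp` Python SDK. The small-angle conversion tool is a hypothetical example, not taken from the lecture; any function decorated with `@mcp.tool()` becomes callable by a connected MCP client.

```python
# Minimal sketch of an MCP server exposing one astronomy tool,
# built on the FastMCP helper from the official Python SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("astro-tools")

@mcp.tool()
def arcsec_to_parsec(angular_size_arcsec: float, distance_pc: float) -> float:
    """Physical size in parsecs for an angular size at a given distance."""
    # Small-angle approximation: 1 radian = 206265 arcsec.
    return angular_size_arcsec * distance_pc / 206265.0

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, ready for an MCP client
```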
Deep Learning Frameworks: Part 1
Phill Cargile, Harvard-Smithsonian CfA
Master the fundamentals of PyTorch and automatic differentiation. Learn how to build, train, and optimize neural networks using PyTorch's powerful autodiff engine. From basic tensor operations to training your first neural network.
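As a minimal illustration of the ideas in this session, the sketch below fits a one-hidden-layer network to noisy sin(x) data; the single `loss.backward()` call is where PyTorch's autodiff engine computes gradients for every parameter. The toy data and architecture are assumptions for illustration, not the lecture's own example.

```python
# Compact PyTorch sketch: autodiff plus a full training loop,
# fitting a small network to noisy y = sin(x) data.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-3, 3, 128).unsqueeze(1)      # shape (128, 1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)     # noisy targets

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()    # autodiff: gradients for every parameter
    optimizer.step()   # gradient-based parameter update

print(f"final MSE: {loss.item():.4f}")
```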
Deep Learning Frameworks: Part 2
Phill Cargile, Harvard-Smithsonian CfA
Dive into JAX, Google's high-performance numerical computing library. Learn how JAX combines NumPy-like syntax with automatic differentiation, vectorization, and JIT compilation to accelerate scientific computing and machine learning workflows.
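The sketch below composes JAX's three signature transforms on one toy function: `grad` for derivatives, `vmap` for batching, and `jit` for XLA compilation. The function itself is an illustrative assumption.

```python
# JAX's core transforms composed on a single function.
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) ** 2

dfdx = jax.grad(f)        # derivative: 2 sin(x) cos(x) = sin(2x)
batched = jax.vmap(dfdx)  # apply the gradient elementwise over an array
fast = jax.jit(batched)   # compile the batched gradient with XLA

xs = jnp.linspace(0.0, jnp.pi, 5)
print(fast(xs))           # matches jnp.sin(2 * xs)
```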
Duration: 26-week series (November 1, 2025 - May 31, 2026)
Format: Weekly 1-hour sessions (40-45 minute lecture plus Q&A)
Time: Mondays at 4:00 PM ET
Delivery: Remote only
Meeting Link: Available on the NASA AI/ML STIG Official Page
| Week | Date | Topic | Speaker |
|---|---|---|---|
| 1 | Nov 3 | Overview | Jesse Thaler, MIT |
| **Module 1: Large Language Models as Autonomous Agents** | | | |
| 2 | Nov 10 | LLM API Basics | Yuan-Sen Ting, OSU |
| 3 | Nov 17 | RAG & Function Tools | Yuan-Sen Ting, OSU |
| 4 | Nov 24 | LLM as Agent | Francisco Villaescusa-Navarro, Flatiron |
| 5 | Dec 8 | Model Context Protocol (MCP) | Yuan-Sen Ting, OSU |
| **Module 2: Deep Learning Frameworks** | | | |
| 6 | Dec 15 | PyTorch and Autodifferentiation | Phill Cargile, Harvard-Smithsonian |
| 7 | Dec 22 | JAX | Phill Cargile, Harvard-Smithsonian |
| **Module 3: Neural Network Theory** | | | |
| 8 | Jan 12 | Inductive Biases | John Wu, STScI |
| **Module 4: Neural Network Architectures** | | | |
| 9 | Jan 19 | Convolutional Neural Networks (CNNs) | John Wu, STScI |
| 10 | Jan 26 | Recurrent Neural Networks (RNNs) | Daniel Muthukrishna, Harvard/MIT |
| 11 | Feb 2 | Graph Neural Networks (GNNs) | Tri Nguyen, Northwestern |
| 12 | Feb 9 | Transformers | TBD |
| **Module 5: Physics-Inspired Networks** | | | |
| 13 | Feb 16 | Equivariant Networks - Theory | TBD |
| 14 | Feb 23 | Equivariant Networks - Applications | TBD |
| **Module 6: Generative Models** | | | |
| 16-20 | Mar 2 - Mar 30 | Normalizing Flows, Diffusion Models, Flow Matching, Simulation-Based Inference | TBD |
| **Module 7: Reinforcement Learning** | | | |
| 21-23 | Apr 6 - Apr 20 | RL Fundamentals, Applications to Instrumentation & Telescope Scheduling | TBD |
| **Module 8: Data & Future** | | | |
| 24 | Apr 27 | Open-Source Datasets and Best Practices | TBD |
| 25 | May 4 | Foundation Models for Astronomy | TBD |
| **Town Halls** | | | |
| 26 | May 11 | Final Town Hall | — |
For inquiries, please contact: ting.74@osu.edu
The AI/ML STIG is open to the national and international community without regard to institutional affiliation, educational background, or career stage. We welcome astronomers, astrophysicists, data scientists, and anyone interested in AI applications in astronomy.
Mailing List: Stay updated on upcoming lectures, events, and resources
Please send an email to AI-ML-STIG-join@lists.nasa.gov with the subject line "Join" to be added to the AI/ML STIG email list.
All lecture recordings will be hosted on the NASA Cosmic Origins Program subpage, making them accessible to the broader community for asynchronous learning.