Interdisciplinary Workshop

Philosophy Sees the Algorithm: Reconsidering Knowledge and Community in AI-Based Science

An interdisciplinary dialogue on the transformation of scientific practice.

Dates: December 11-12, 2025
Location: The Ohio State University
Caldwell 0120 / PRB 4138
Format: In-Person Workshop

Scientific Background

The last decade has witnessed the increasing adoption of state-of-the-art artificial intelligence as a research tool across scientific domains. From astronomy to biology, scientists have been exploring diverse uses of AI, such as experiment automation, data processing, pattern identification, statistical inference, simulation acceleration, and complex modeling. The maturation of large language models has also elevated AI to the role of research assistant, aiding multiple stages of research planning, including literature surveys, hypothesis generation, grant applications, and publication review.

Just as new instruments such as telescopes and computer simulations have led to significant scientific changes, the growing use of AI can be expected to reshape both how scientific knowledge is generated and how the scientific community works. Characterizing, understanding, and navigating this potentially historic moment in science requires interdisciplinary collaboration. This workshop brings together scientific, philosophical, and sociological perspectives to address central questions about the potentially new shape of knowledge and the scientific community.

Organizers

Yuan-Sen Ting

The Ohio State University
Astronomy

Siyu Yao

University of Cincinnati
Philosophy

Andre Curtis-Trudel

University of Cincinnati
Philosophy

Workshop Goals

The workshop aims to foster dialogue and collaboration between scientists who develop or apply AI and humanities scholars who conduct philosophical, historical, and sociological analysis of AI-induced scientific changes.

Central Questions

  • How, if at all, is AI contributing to scientific progress?
  • Does AI pose new risks to valuable aspects of science, such as objectivity, rigor, and understanding?
  • What kinds of disciplinary or interdisciplinary expertise are needed in the AI era?
  • What kind of community organization do we need to foster collaboration?

Detailed Schedule & Speakers

The two-day workshop is structured around two themes: "AI Inside the Research Pipeline" and "AI and the Scientific Ecosystem". Each theme comprises multiple subtopics addressed by philosophers and scientists. Each session features several talks followed by Q&A and a panel discussion, facilitating productive dialogue and potential collaboration.

AI Inside the Research Pipeline

Investigating how AI reframes experimentation, data practice, and the standards of progress within scientific projects.

AI and the Scientific Ecosystem

Exploring how communities, institutions, and interdisciplinary collaborations adapt to AI-driven approaches.

December 11, 2025: AI Inside the Research Pipeline

📍 Location: Caldwell Laboratory 0120

8:50 AM - 9:20 AM — Registration & Coffee
9:20 AM - 9:40 AM — Welcome Speech
Panel 1 — The Goal of Science vs. the Possibilities of AI
Schedule:
  • 9:40 - 10:00 AM: Tanya Berger-Wolf — TBD (15 min + 5 min Q&A)
  • 10:00 - 10:20 AM: Andre Curtis-Trudel — AI and Scientific Pursuit (15 min + 5 min Q&A)
10:20 AM - 11:00 AM — Coffee Break (Join Astronomy Coffee at 10:30)
Panel 1 (Continued)
Schedule:
  • 11:00 AM - 12:00 PM: Panel Discussion (Tanya Berger-Wolf, Andre Curtis-Trudel, Siyu Yao, Patrick Osmer)
Discussion Questions:
  • What does it mean for science to progress? What's the difference between progress, development, and mere change?
  • Has science changed its most important research goals or methodologies due to other tools before AI? Is science experiencing another change as AI becomes increasingly used? How special is AI as a research tool?
  • How do we evaluate whether a scientific change is progressive or not? Are there changes that are not progressive?
  • Progress is always progress toward a goal or ideal, connecting to the promotion of certain epistemic values. What goals do scientists prioritize and what are some relevant values?
  • Does the opacity of AI and the automation of research sacrifice understanding for predictability?
  • Are the promises of AI-induced progress similar across research domains? Are there some domains that might benefit more from AI and why?
  • What are the distinctions between AI merely facilitating discoveries and AI making discoveries?
12:00 PM - 1:30 PM — Lunch
Panel 2 — Philosophical Issues of AI as Research Agents
Schedule:
  • 1:30 - 1:50 PM: Yuan-Sen Ting — How Do I Shorten My Project Worktime by a Factor of Two Every Semester (15 min + 5 min Q&A)
  • 1:50 - 2:10 PM: Mel Andrews — AI for Peer Review and the Future of Science? (15 min + 5 min Q&A)
  • 2:10 - 2:30 PM: Angus Fletcher — Einstein, Herschel, Darwin: Why AI Can't Do Foundational Science (15 min + 5 min Q&A)
2:30 PM - 3:00 PM — Coffee Break (Open to local visitors)
Panel 2 (Continued)
Schedule:
  • 3:00 - 4:00 PM: Panel Discussion (Mel Andrews, Angus Fletcher, Helen Meskhidze, Yuan-Sen Ting)
Discussion Questions:
  • What are the current practices of designing AI research agents? What are some successes and pitfalls?
  • What are some differences between human and AI styles of cognition and inquiry?
  • What roles can AI take in research? Are they like main agents, collaborators, students, educators, arbiters, or instruments? What difference does it make for us to see AI in different roles?
  • How do we justify inductive trust in LLM agents? Does their opacity affect our trust? Can we trust an LLM without understanding it? Does trusting require knowing the mechanisms or prompts?
  • As research agents, do LLMs have elements such as beliefs, desires, and intentions? How does one attribute credit and responsibility to LLMs?
  • How are AI agents evaluated against existing scientific norms, such as replicability, peer review, and transparency? Do these norms remain important, or do they need revision?
  • Is doing science a goal in itself for the person doing it, or is it more important to obtain scientific knowledge, whether done by humans or AI?
6:00 PM — Conference Dinner or Reception
December 12, 2025: AI and the Scientific Ecosystem

📍 Location: Physics Research Building (PRB) 4138

9:00 AM — Coffee
Panel 3 — Scientific Community Dynamics and Resource Distribution
Schedule:
  • 9:20 - 9:40 AM: David Weinberg — Can AI Revolutionize Scientific Discovery? (15 min + 5 min Q&A)
  • 9:40 - 10:00 AM: Moh Hosseinioun — AI Reshapes the Practice of Science: Evidence from Two Decades of Research Proposals (15 min + 5 min Q&A)
  • 10:00 - 10:20 AM: Benjamin Santos Genta (Remote) — No AI Reproducibility? No Problem (15 min + 5 min Q&A)
10:20 AM - 11:00 AM — Coffee Break (Join Astronomy Coffee at 10:30)
Panel 3 (Continued)
Schedule:
  • 11:00 AM - 12:00 PM: Panel Discussion (Moh Hosseinioun, Josh Greenberg, David Weinberg, Andre Curtis-Trudel)
Discussion Questions:
  • Are more funding and other institutional resources being devoted to research projects that use AI across scientific disciplines? Who makes these decisions, and what are the underlying rationales? Is this shift expected to be long-term or temporary?
  • What are some known costs and gains of AI-based research at the present stage? Does AI-based research generate more outcomes on average? What types of outcomes?
  • How are scientists gearing up to develop AI-based research projects? For example, by learning to use AI themselves, collaborating with scientists who already use AI, or collaborating with computer scientists? In other words, are there more interdisciplinary research groups?
  • How is labor distributed in these research communities? What roles do AI-fluent team members take on? How is research credit assigned?
  • Where are AI-proficient scientists heading in their careers? What are the benefits and drawbacks of an interdisciplinary identity?
  • Are we witnessing a divide between 'compute-rich' and 'compute-poor' science? Are there emerging inequalities or even hierarchies between them?
  • What are the different research agendas and tastes of emerging AI centers or initiatives? Are the expectations of AI-based science homogeneous?
12:00 PM - 1:30 PM — Lunch
Panel 4 — A Toolkit for the Tool: What Do Scientists Need to Use AI Well?
Schedule:
  • 1:30 - 1:50 PM: Bryan Carstens — Implementing AI Solutions for Specific Applications and Discovering New Bat Species in the Process (15 min + 5 min Q&A)
  • 1:50 - 2:10 PM: Kati Kish Bar-On (Remote) — A Tool or a Collaborator? Rethinking Mathematical Intuition, Agency, and Understanding in the Age of AI (15 min + 5 min Q&A)
  • 2:10 - 2:30 PM: James Phelan — AI as Creative and Uncreative Writer: A Rhetorician's Perspective (15 min + 5 min Q&A)
2:30 PM - 3:00 PM — Coffee Break (Open to local visitors)
Panel 4 (Continued)
Schedule:
  • 3:00 - 3:20 PM: Siyu Yao — Integration of AI in Science: Beyond "the Illusion of Understanding" Toward Pragmatic Understanding (15 min + 5 min Q&A)
  • 3:20 - 4:20 PM: Panel Discussion (Bryan Carstens, Siyu Yao, Eric Fosler-Lussier, James Phelan)
Discussion Questions:
  • What counts as AI fluency/literacy? What kinds of knowledge or expertise do scientists need in order to use a new tool like AI?
  • What kinds of scientists do we expect to play a leading role in future AI-based research?
  • What kinds of scientific judgment cannot be automated, and how can scientists cultivate those skills to work alongside AI?
  • What kinds of data infrastructures—curation, metadata, provenance tracking—are essential for meaningful AI use in real scientific contexts?
  • What are the minimal infrastructural standards (documentation, reproducibility pipelines, version control) that allow AI results to be trusted?
  • Narratives, or good stories, are important to science. What does a good scientific story require? Are narratives always human? How well can AI tell them?
  • How do scientists decide when AI is an exploratory tool versus when it becomes a part of the evidential chain?
  • How does the opacity of AI complicate peer review and open science?
4:30 PM - 5:00 PM — Closing Remarks

Confirmed Speakers / Panelists

  • David Weinberg, Astronomy, The Ohio State University
  • Angus Fletcher, English, The Ohio State University
  • Moh Hosseinioun, Computational Social Science, Northwestern University
  • Josh Greenberg, Alfred P. Sloan Foundation
  • Mel Andrews, Philosophy, Princeton University
  • Bryan Carstens, Evolution, Ecology and Organismal Biology, The Ohio State University
  • Helen Meskhidze, Philosophy, University of Cincinnati
  • Eric Fosler-Lussier, Computer Science and Engineering, The Ohio State University
  • Kati Kish Bar-On, Philosophy, Boston University
  • Benjamin Santos Genta, Philosophy, New York University
  • Yuan-Sen Ting, Astronomy, The Ohio State University
  • Siyu Yao, Philosophy, University of Cincinnati
  • Andre Curtis-Trudel, Philosophy, University of Cincinnati
  • Tanya Berger-Wolf, Computer Science and Engineering, The Ohio State University
  • Patrick Osmer, Astronomy, The Ohio State University
  • James Phelan, English, The Ohio State University

Registration

Register for the Workshop

Registration is now open. Limited spots available.

Submit Talk/Panel Proposal

We invite proposals for talks and panel participation.

View the official workshop announcement on the CCAPP website.

Financial Support

This workshop is made possible through generous support from:

Center for Cosmology and AstroParticle Physics

The Ohio State University

Alfred P. Sloan Foundation

Metascience and AI Postdoc Fellowship

UC Center for Humanities and Technology

University of Cincinnati