Judgment and Responsibility in the Age of AI

By Phil Weilerstein

I remember when calculators first entered the classroom. My father had taught me how to use a slide rule, a skill once foundational to engineering, math, and science. I learned it carefully, deliberately.

And then I never used it again.

The calculator didn’t simply replace the slide rule; it forced educators to reconsider what critical skills and knowledge truly mattered as tools evolved. That transition took time. We debated, adjusted approaches, and wrestled with what students needed to understand, not just perform.

Today, we face a similar moment with the advent of artificial intelligence (AI), only the pace is far faster and the consequences far greater.

In education and innovation, change is happening at breathtaking speed. AI is transforming how knowledge is accessed, how design decisions are made, and how early-stage innovators are expected to perform. These changes are avalanching, often outpacing educators’ and university leaders’ efforts to define what judgment and responsibility should look like in an AI-enabled world.

That tension was at the heart of a recent BME-IDEA panel that brought together leaders from across the biomedical engineering and medtech innovation ecosystem. Three experienced innovators reflected on how to prepare the next generation of innovators for a healthcare landscape increasingly shaped by AI: Tamara Baynham, independent medical device consultant and patent agent; Joe Smith, chief science officer at Becton Dickinson and a practicing cardiologist; and Josh Makower, co-founder and director of the Stanford Byers Center for Biodesign.

The discussion focused on the human judgment and responsibility that must guide the use of AI in developing innovations. The challenge is not simply adopting new technologies, but deciding what must remain firmly human as those tools become ubiquitous. What emerged was not a set of answers, but a sharper set of questions that educators, industry leaders, and institutions would do well to embrace.

From Tool Use to Discernment

As AI becomes more prevalent in biomedical innovation, the focus must shift from simply using tools to exercising discernment. Critical use of AI outputs is learned, not instinctive—even for digital natives. Students need guidance on effectively prompting AI tools and skillfully assessing whether their outputs can be trusted. This raises a deeper question: What foundational knowledge must students internalize to exercise sound judgment?

Tamara Baynham noted that students often fixate on product outcomes rather than the design process. True preparation, she argued, requires mastery of design controls and U.S. Food and Drug Administration (FDA) guidance: translating user requirements into design specifications, documenting rationale, and developing early regulatory awareness.

Faculty must teach students not only to operate tools, but to interrogate what those tools produce. AI has limits and flaws. Used thoughtfully, it can accelerate innovation, but it cannot replicate empathy or ethical reasoning.

Embedding empathetic design into the educational process is critical. Joe Smith emphasized that good design emerges from an ability to “live with that patient, see the suffering, understand the stress of the surgeons and doctors treating these people, and tune into that frequency.”

These uniquely human capacities remain essential to biomedical engineering and are the foundation on which advances in treatment rest.

Learning Through Failure

The most enduring lessons come through failure. In engineering and design, failure is instructional, with lessons best absorbed through reflection on real-world experience. Smith highlighted two competencies central to this process: clinical intimacy and storytelling.

Immersing students in clinical environments is essential to help them see firsthand how devices are actually used—where they succeed, where they fall short, and why. Equally critical is the ability to communicate those insights clearly: both the technical results and why they matter.

Building these skills demands hands-on, iterative experience. Experiential learning lets students test hypotheses and see how design decisions ripple through clinical, regulatory, and user contexts. As Smith noted: “It’s important for biomedical engineers joining the workforce to gain intimate exposure to the chaotic clinical environment where their products and services will be deployed. It’s the only way to understand the true unmet need.”

Through deliberate practice and real-world exposure, students hone the judgment, communication, problem-solving, and critical-thinking skills needed to navigate complexity and confront uncertainties that AI alone cannot resolve.

Taking Responsibility

In biomedical engineering, the stakes are high: Design choices affect real patients, not hypothetical scenarios. As Josh Makower observed, innovation begins with deep expertise; creativity and entrepreneurship amplify impact, but only when grounded in disciplined scientific evaluation of effectiveness.

AI is not a neutral actor. It carries biases, and its use has ethical, regulatory, and environmental consequences. Curricula must make skepticism and discernment explicit learning objectives. Faculty must model judgment as intentionally as they model technical fluency. The risk is not AI itself, but its unexamined use.

AI is already embedded in biomedical industry workflows, from software-as-a-medical-device and clinical decision support to documentation, supply chain, and regulatory processes. Students entering this world must understand that AI requires oversight, not blind trust. Like an apprentice or junior colleague, it needs supervision, performance evaluation, and course correction.

Industry has recognized this shift, investing in training programs to help employees manage AI effectively as production cycles accelerate and expectations evolve. The questions are managerial as much as technical: Is the output reliable? How is performance evaluated over time? Where do validation and the allocation of responsibility belong?

When students learn to ask sharper questions, they learn to take responsibility for AI’s use. In environments where the cost of error is high, judgment cannot be automated. Accountability cannot be outsourced to an algorithm. AI can amplify human capabilities, but it cannot replace human responsibility.

The Opportunity Before Us

AI must be integrated into learning and innovation without eroding what makes biomedical engineering fundamentally human. This is a long game. Focusing solely on immediate disruptions risks losing sight of education’s broader responsibilities.

We need to ask harder questions: What critical thinking skills are needed to use AI well? What responsibilities do faculty carry beyond compliance? Who is accountable when AI-informed decisions cause harm? What can be adapted now without sacrificing the long view? And what does it mean to be a responsible innovator when efficiency can overshadow empathy?

AI will continue to evolve. That much is certain. What isn't predetermined is how we prepare students to engage with it. The opportunity before us is clear: elevate experiential education so competencies are both learned and demonstrated, and re-emphasize the essential human skills AI cannot supply. The best-prepared biomedical engineers will know how and when to use these tools.

And when not to.


Phil Weilerstein has led VentureWell since its founding in 1995 and today serves as president and CEO. By developing and expanding VentureWell’s programs on a national and global scale, Phil has helped advance VentureWell’s mission to solve global challenges through science- and technology-driven innovation and entrepreneurship. Phil is committed to sharing VentureWell’s learnings and resources to support the creation of inclusive and more equitable pathways for student innovators to succeed in venture creation. Under Phil’s leadership, VentureWell has collaborated with key science funding agencies, major philanthropies, and hundreds of universities to train and support thousands of emerging students, researchers, and faculty innovators.

Phil attended the University of Massachusetts, where he was a co-founder of a biotechnology company developing naturally occurring pest control products. He is a founder and past chair of the ASEE Entrepreneurship Division, and a recipient of the 2008 Price Foundation Innovative Entrepreneurship Educators Award, the 2014 Engineering Entrepreneurship Pioneers Award from ASEE, the 2016 Deshpande Symposium Award for Outstanding Contributions to Advancing Innovation and Entrepreneurship in Higher Education, and a 2025 Sentinel Award from the National Academy of Inventors.
