In an era marked by unprecedented technological acceleration — where artificial intelligence can generate apps, mimic voices and influence global systems — the stakes of computer science ethics have never been higher. At UC Santa Barbara, one professor is taking a powerful step to ensure that future technologists are equipped not only with technical skill, but also ethical responsibility.

Maryam Majedi, an assistant teaching professor in the Computer Science Department at UCSB, is launching the university’s first Embedded Ethics Lab, a project designed to integrate ethical reflection directly into the technical curriculum.


“I did my Ph.D. in data privacy, and during that time, I came to realize how deeply privacy is intertwined with broader ethical concerns, many of which I hadn’t encountered during my earlier studies in computer science,” Majedi said. “This led me to explore a wide range of ethical issues beyond the scope of my original research.”

Her journey took her to the University of Toronto, where she joined a pioneering embedded ethics initiative.

“I became part of the Embedded Ethics Education Initiative team and developed the first two embedded ethics modules, which turned out to be highly successful. We received great feedback from students.” 

She brought that enthusiasm to UCSB. “Here, I’m fortunate to be part of a supportive department that enables me to build the Embedded Ethics program and help our students benefit from it.”

The Embedded Ethics Lab is grounded in the belief that ethics shouldn’t be siloed into separate philosophy courses, but embedded within the core of technical instruction. 

“The core mission is to expose students to the ethical issues that may arise from their work when they become decision makers and leaders in technology,” Majedi said. “Our goal isn’t to tell students what’s right or wrong; rather, it is to help students recognize that they have a responsibility for the ethical implications of the technologies they develop.”

That responsibility includes “learning to identify and consider other people’s perspectives when developing and designing software and understanding how the technologies they create can impact diverse groups with different backgrounds,” Majedi remarked. 

Rather than rely on separate ethics classes alone, the lab focuses on modules that slot into existing computer science courses. These are designed to help instructors teach both technical concepts and their ethical implications simultaneously.

“The ethics modules are designed to be highly adaptable and can be integrated into a variety of courses depending on their topic. For instance, a module on graph data structures can be paired with examples that explore key privacy concepts,” Majedi shared. Importantly, she added, the modules teach “the technical concept using examples that help students recognize the range of ethical issues that can arise in practice.” 
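
To picture what such a pairing could look like in practice, here is a minimal, hypothetical sketch in the spirit Majedi describes (it is not taken from the lab’s actual modules): students implement a friendship graph as an adjacency list, the standard CS2 material, and then discover that even an “anonymized” graph can leak a hidden attribute through a simple majority vote over a user’s neighbors.

```python
# Hypothetical CS2-style exercise, not from the lab's actual materials:
# the same adjacency list a graphs lecture would teach, used to show how
# an "anonymized" social graph can leak private attributes.
from collections import Counter

# Friendship graph stored as an adjacency list.
friends = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

# Some users publicly list an attribute; others keep it private.
public_attr = {"A": "blue", "B": "blue", "C": "blue", "E": "red"}

def guess_attribute(graph, known, user):
    """Guess a hidden attribute from the majority among a user's friends.

    The ethics hook: user D disclosed nothing, yet the graph structure
    alone supports a confident guess about them.
    """
    votes = Counter(known[f] for f in graph[user] if f in known)
    return votes.most_common(1)[0][0] if votes else None

print(guess_attribute(friends, public_attr, "D"))  # prints "blue"
```

The data-structure code is exactly what the course would assign anyway; the discussion of inference and consent rides along with it, which is the point of the embedded approach.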

This design addresses two key barriers: overloaded syllabi and limited instructor time. “Ten weeks is too short,” Majedi said. “The goal is not to add extra material at the expense of the course’s core content.” Instead, the aim is “teaching the technical concept with the incorporated ethical dimension without compromising either.”

Each module generally includes “a concise pre-lecture material assessed by a short quiz, lecture slides, in-class activities and follow-up homework, making it easy for instructors to incorporate without additional burden.”

Majedi emphasized that this approach complements existing courses. “What embedded ethics contributes, and how it complements their goals, is connecting ethical issues directly to the core concepts at the time they are being taught,” she said.

The lab currently includes about eight students, with the team focused on building infrastructure for scaling ethical instruction across large classes.

The lab, known as CS3E, is currently developing five ethics modules designed for CS1 and CS2 courses. These modules explore key topics such as accessibility, AI and creativity, intellectual property, misinformation and data privacy. The team is also designing a course management system called Innostruction, aimed at creating a more inclusive and personalized learning experience for students in large classrooms, paired with a content management system called Iris.

“The goal is to provide an individualized learning experience for each student, ensuring they receive the same level of support and benefit as they would in a small class setting,” Majedi said.

Accessibility is a core focus: “In Iris, in addition to efficient content representation, we’re focusing on dyslexia and ADHD, as well as promoting equity for students whose first language is not English,” Majedi said. 

The lab’s work has already earned several awards at the AI Community of Practice Spring Symposium: a first-place prize in 2024 for a project by undergraduate student Tianle Yu, and two poster awards in 2025 won by students Sammy Lesner and Wong Zhao.

“So far, we’ve had good success, though the modules and the projects are still under development,” Majedi noted. Once ready, they will be shared publicly. 

Majedi is equally thoughtful about the limits of AI in education. While she supports the use of AI to enhance learning experiences, she is clear about its boundaries.

“We need to set clear limits on how much help students can receive,” she said, referring to students using AI tools. “I do not want to eliminate the role of TAs and ULAs.” 

She firmly believes in the value of human involvement in education. “I see AI as a tool to assist and enhance the quality of work, not as a substitute for the creative process or the human experience of learning,” Majedi stated. 

“When it comes to creativity, meaningful learning and the invaluable process of making and learning from mistakes, I don’t want to take that away from my students.”

When asked how the lab will keep pace with future ethical challenges, Majedi acknowledged the uncertainty, but emphasized the importance of equipping students to think critically.

“Exactly which direction we should go, I don’t know,” Majedi said. “AI and LLMs are new phenomena that are impacting everyone’s lives in ways that feel uncontrolled and, at times, even a bit scary. But at the same time, it’s incredibly exciting.”

For Majedi, the key isn’t to predict every challenge, but to prepare students to adapt. 

“We want to make sure we’re educating students who care about these issues. The first step in fostering that care is helping them recognize that these issues exist,” Majedi emphasized.

What the Embedded Ethics Lab offers is not a set of fixed answers, but a framework for better questions — a way of preparing students not just to code, but to care. 

“We keep trying,” Majedi said. “My advice to students who want to be more ethical in computer science is to stay open-minded and always be willing to learn.”

A version of this article appeared on p.11 of the May 22, 2025 edition of the Daily Nexus.
