Catholic social teaching in a digital age

An interview with Megan Levis Scheirer

January 14, 2026

At a time when digital technologies like the Internet, social media, and artificial intelligence threaten to undermine democracy, erode personal privacy and agency, displace jobs, and increase inequality, the Institute for Social Concerns’ Megan Levis Scheirer is working to create more virtuous digital spaces by drawing on the resources of Catholic social teaching (CST).

Megan Levis Scheirer (center) leads a reading group on CST and artificial intelligence in the Geddes Hall Coffee House last fall

Along with colleagues in the Department of Computer Science and Engineering, Levis Scheirer previously applied CST as a roadmap for user engagement with digital technologies. Now she and her colleagues have turned their attention from technology use to technology design. By integrating CST at the software design level, they are providing a framework for developers to move beyond the attention economy toward a more human-centered digital future.

The Current recently sat down with Levis Scheirer to discuss her and her colleagues’ research on CST in a digital age. The following interview has been lightly edited for length and readability.

The Current: What does Catholic social teaching have to do with digital technology?

Levis Scheirer: Just as the factory system restructured our social fabric in the Industrial Revolution, we are experiencing a digital revolution in which digital technologies are restructuring the ways we communicate and engage with each other. In both cases, the Church is saying we need to pay attention to what it means to be a human person, how to support individual dignity, how we interact with others, and how to maintain a preferential option for the poor. 

The Current: How do you approach these questions both as an engineer and as someone wrestling with CST?

Levis Scheirer: A lot of my work involves developing software design principles. Technologies can encourage certain behaviors. My question is this: How can we use CST to be creative in the way we design technologies? We want to design them in ways that encourage virtue and the common good rather than just aiming for the “attention economy.”

Levis Scheirer describes how Catholic social teaching can help engineer for the common good

The Current: One of those principles is subsidiarity, which is not a word the average person encounters on a daily basis.

Levis Scheirer: Right. Subsidiarity is often misunderstood, alongside solidarity. Solidarity refers to our global interconnectedness. Subsidiarity, conversely, is about meeting needs at the local level whenever possible. It says that organization and control should happen at the lowest level possible and only move up if needs cannot be met locally. In a paper my colleagues and I currently have under review, we argue that there is a “default setting” in a lot of modern software where control is held at a level above the user. For example, Google collects a lot of data anytime you send a message or use its search platform. We’ve been thinking about this through the lens of the open-source community and data privacy.

The Current: So subsidiarity isn’t just libertarianism; it’s discernment about which levels of control are most appropriate.

Levis Scheirer: Exactly. In the paper, we look at “baseline communication”—data sent even when you aren’t doing anything. Many people assume that if their browser or laptop is just sitting open, nothing is happening. We found that communication is happening on the back end constantly. This is historically new because our technologies are always online. Knowing this allows for discernment: You might not feel comfortable with that in a personal setting, but you might accept it in a work setting to collaborate effectively.

The Current: Another tenet of CST is the care for creation. How does your work address that?

Levis Scheirer: A big part of our work on developing design principles for AI is care for creation. Right now, everyone thinks of AI as Large Language Models (LLMs), which require massive data centers because they perform semantic matching across the entire Internet for every query. But since the 1980s, researchers have also developed small AI models trained to solve specific problems, requiring much less data and energy. I worry about the business model of big AI companies; the cost per search is so high that I could see these data centers filled with cobwebs in ten years, leaving us with just metal sitting in a field where beautiful land used to be.

The Current: What about the CST tenet of human dignity?

Levis Scheirer: Pope Francis’s 2025 document Antiqua et Nova emphasizes that artificial intelligence is fundamentally different from human intelligence. There is a great line in that document about the power of communication during silence. A human can sit quietly with you and know what you mean; AI doesn’t know how to read silence.

In our paper on design patterns for the common good, we started with social media and are now developing patterns for AI. The first pattern we’re piloting is called “AI are not people.” Some platforms are good at reminding users that AI is not human, but others lean into anthropomorphizing the technology to create an emotional bond. We want to resist the idea that AI can become your romantic partner or close confidant. 

The Current: Some people view AI as the next Industrial Revolution, but it sounds like you view it as the next step of an ongoing digital revolution.

Levis Scheirer: Definitely. We think of AI as a feature of the Internet. It is trained on the Internet; it isn’t creating something totally new but is rather creating something out of information people have already posted online. Antiqua et Nova mentions the issue of “deskilling.” This happened in the Industrial Revolution with manual labor, but writers in the AI space are worried about “moral deskilling.” If we constantly ask chatbots to solve social or moral dilemmas for us, we lose our capacity to think deeply. We shouldn’t offload that deeply human work to models.

Levis Scheirer and Louisa Conwill introduce a CST framework for designing digital and social technologies

The Current: What you’re trying to do sounds countercultural. Since digital technology is not neutral, it feels like you almost have to break its current form to make it virtuous.

Levis Scheirer: That is a complicated question. I agree with Langdon Winner’s seminal argument that “artifacts have politics.” When technologies are built and implemented in a particular way, they encourage specific sets of values. However, technology is also unique because we can use it in ways other than what was intended; philosophers call this “dual use.”

The Current: Where do you plan to take this work going forward?

Levis Scheirer: I think the future of our work lies in looking at open-source, amateur, and even hacker communities. A lot of the coolest innovations on the Internet came not from Big Tech but from communities where making money isn’t the primary goal. Even OpenAI started as an open-source venture. There are also several fascinating Catholic AI startups. There is nothing that says we can’t rebuild these technologies to support CST.

Cory Doctorow points out that when a small number of companies own and run the Internet, we can’t trust them to do what they say is good. I think the solution has to be a grassroots movement. It can happen within companies—we recently had an executive from Meta here who was a Notre Dame alum and very excited about our work—but most great innovation comes from individuals who just want to build something new, weird, and interesting.

By democratizing digital technologies and rebuilding them from the grassroots, we can work to make them not only more weird and interesting but ultimately more virtuous as well.

This spring, Levis Scheirer is teaching the course Technology & Justice at the Institute for Social Concerns as part of the institute’s Catholic Social Tradition Minor.