Black-and-white photograph of the right hemisphere of a human brain, with a ruler for scale.

From “Description of the Brain of Mr. Charles Babbage, F.R.S.,” by V. Horsley, in Philosophical Transactions of the Royal Society of London. Series B, Containing Papers of a Biological Character (1896–1934), 1909 | Royal Society of London / Public Domain


Sam Altman has claimed that by the end of this year, OpenAI will have systems that can do “truly astonishing cognitive tasks.” But what exactly does “cognition” mean in the context of artificial intelligence? As the sophistication of these technologies, our dependence on them, and the rhetoric used to sell them escalate, it’s vital that we grapple with this question. Earlier this year, Luciano Floridi, founding director of the Digital Ethics Center at Yale University, joined Alexandros Schismenos, a post-doctoral researcher on the philosophy of artificial intelligence at the Aristotle University of Thessaloniki, and Giannis Perperidis, an adjunct lecturer on the philosophy of AI at the Ionian University in Corfu, for an online conversation about the present and future of AI technology—and why solving a puzzle is still not the same as understanding it.


Giannis Perperidis: In 2016, you shared the stage with John Searle, who argued that computers, and by extension artificial intelligence, operate purely on syntactic functions and lack the ability to create or process meaning—something he considered fundamental to what makes us human. Do you believe that the ability to create and process meaning is indeed a distinctive characteristic of human beings? And if so, in light of today’s advances in neural networks, do you believe that Searle’s critique still holds?

Luciano Floridi: Yes. On the first point, I think that the ability to produce meaning starts, on some level, at the biological stage. Dolphins, dogs, cats, and birds already have some level of what we may call semantics: symbols that mean something, and that sometimes mean something different depending on the circumstances. When we say that semantics or meaning is uniquely human, what we mean is the full set of features, all the characteristics, of having a meaningful understanding of the world. So it’s not just a matter of sending a signal to another member of your species, which is a very simplified and elementary form of communication. We all know that many species use their own special codes to communicate; bees, for example, dance to communicate the location of food sources to their nest-mates. But in our case, we use language to make sense of the world, and that’s a different sense of semantics. That’s what I mean by semantics—language, but also many other semiotic expressions, like music or pictures. We use them not just to tell people that, let’s say, there’s a danger in the jungle; they are a way of understanding and making sense that something is a jungle, and not a forest or a garden. Semantics is how we interpret reality and make sense of ourselves in it. And that is world-making, or the conceptualization of reality. That’s what I mean by semantics in the strong sense. And that is typically and, as far as we know, only human.

It is also thanks to semantics that we are not constantly here and now. We can be in a different space, in a different time. And all of a sudden, not. So that’s the strong sense of semantics that is uniquely human as far as we know. There might be, who knows, other species in the universe. The universe is a big place, but we haven’t encountered them. Anything like us or better than us, like angels and gods, may have that kind of semantics.

If we distinguish between semantics as the ability to transfer signals and semantics as a way of making sense of the world, of giving a narrative comprehension of it, the latter is typically, and uniquely, human. I think it requires—and I’m not sure whether neuroscientists would agree—a special detachment from the world, almost like being a little bit of a broken mechanism. Computers are not, if you pardon the analogy, broken enough. Likewise, the animals, the biological agents that surround us, are not broken enough.

So we see the world not in a direct, first-gear, here-and-now way, but as something else. It’s that something that stands against us. That’s the Gegenstand—the object that confronts us—and the Ding an sich, the thing-in-itself, understood as the source of our constructions, that which makes them possible, or allows them. The thing-in-itself confronts us as a constraining affordance. So we react to the world through meaning and semantics—hacking language as a communication code to transform it into a tool of sense-making. In this way, we can couch reality in meaningful flows that then make it more understandable, perhaps even more terrifying, because we can see things where there are none, thanks to the meaning we can attribute to clouds and trees, to conversations and gestures by a friend, and so on. We are narrative agents, and the only narrative agents we know of in the universe—so far.

When John Searle says that computers are syntactic engines, he is absolutely right. Any kind of computational entity, any Turing machine, including any form of AI, treats data in terms of necessary deductions or probabilities. This is the distinction between syntax (how elements may be combined, and according to which regularities) and semantics (what the elements and their combinations may mean). Data processing systems can do extraordinary things syntactically, not semantically. Our age will go down in history as the age that discovered how much one can do with syntax in contexts where we thought only semantics could produce what we want.

Does Searle’s criticism still stand? Absolutely. And I think that all the rhetoric and the sometimes willfully obscure remarks made about artificial intelligence are either naive or misled. They might be doing bad philosophy, like some computer scientists, and I have a couple of very famous people in mind. It’s not that they don’t know the technology; it’s that they do not have a good epistemological understanding of it. Great engineers may be terrible, terrible philosophers.

And the fact that we hear so much about artificial intelligence being able to “understand” is pretty much our own rhetorical creation. It’s like looking at clouds and seeing faces in them, or looking at trees and thinking, Well, if they move, grow, and whisper in the forest, they must have wishes, emotions, and plans. There must be more forces in this universe. Unfortunately, that’s not true, and it’s not true of AI either. It’s a new form of animism that we should not endorse.

When we were culturally younger, we couldn’t resist the temptation to imagine that an earthquake was the manifestation of a deity. But an earthquake doesn’t communicate with us; it doesn’t need to. When you have an object in front of you that responds to your questions, however, it seems as though the object understands. This interaction is even more tempting; it pushes us to believe in some deeper, more conceptual understanding. But it is an illusion, a trick that happens at the interface.

Well, you know, the real point is that we discovered there are different ways to reach the same goal. We might arrive at the same destination through different routes. It’s like washing the dishes: I can do it, or a dishwasher can do it. It doesn’t mean we do it the same way, and it doesn’t mean we are the same kind of entity. In both cases, the dishes end up clean. What we see today are the effects produced by chatbots reproducing our conversations, and you can’t tell the difference between theirs and ours. But is it the same process? The same source? No. The outcome is the same: clean dishes. If you come to my house and see clean dishes, you won’t know whether I or the dishwasher cleaned them. So what can you philosophically infer from that? If you do know, you might infer that we have different ways of doing the dishes: the machine does it in a way very different from the individual effort. So today, what we have discovered is that content, visual as well as written, can be produced in two different ways. That says nothing, and I mean nothing, about the nature of the source and the nature of the process. This is where we get completely lost. And that’s why I think reminding people of the Chinese room argument, et cetera, is a good idea. It’s an intuitive way of describing what happens when you have processing of data without any understanding, and yet the outcome looks perfectly fine.

Imagine there’s a huge transparent screen between you and an artificial intelligence system. You both have a million-piece puzzle to assemble on the screen. As the machine starts putting the puzzle together, you begin to see an image emerging, let’s say The School of Athens. Figures start to take shape, and you might think, “Whoever is assembling this puzzle knows exactly where each piece goes, understands that blue represents the sky, that this part depicts part of the building, and this one maybe a book.” It appears as though the AI system comprehends the entire picture. It seems impossible to solve a million-piece puzzle without understanding the meaning of each piece, where each should go, and what the original picture looks like. On the other side, however, the AI sees only white puzzle pieces. You see the colors, the shapes, the full picture, and where the pieces should go, perhaps following some simple strategies (we all look for the four corners first), but the AI is dealing only with the shapes and how well they fit with each other. You are dealing with the whole puzzle semantically; the AI system does it syntactically. It figures out which piece fits with which purely through millions of computational steps and refinements, learning from the mere shapes, without any real understanding. This is statistical pattern recognition. If we watch someone assembling a puzzle, it’s almost irresistible to believe they understand what they’re doing. But if you could see from the AI’s syntactical perspective, so to speak, you would see only white pieces being put together with other white pieces, and realize it’s just matching shapes, not understanding the image. We must understand that what is really going on is an amazing transformation in content production that is 100 percent syntactic and statistical and has nothing to do with semantics or understanding. If we don’t get this, we’ll get pretty much lost.
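To make the analogy concrete, here is a minimal toy sketch in Python. It is entirely our own illustration, not anything Floridi describes or any real AI system: every name and data structure in it is invented for the example. A “piece” carries only edge-shape signatures, no image content whatsoever, and the solver completes the puzzle by matching complementary edges alone.

```python
# Toy model of the puzzle analogy: a piece is nothing but its four
# edge-shape signatures (None = flat border edge). The solver never
# sees colors or the picture; it only matches shapes. All names and
# structures here are invented purely for illustration.
import random
from itertools import count

SIZE = 4  # a 4x4 "puzzle" keeps the demo readable

ids = count(1)  # a unique shape ID per interior boundary (a tab/slot pair)
h_edges = {(r, c): next(ids) for r in range(SIZE) for c in range(SIZE - 1)}
v_edges = {(r, c): next(ids) for r in range(SIZE - 1) for c in range(SIZE)}

def piece(r, c):
    """Return the piece at (r, c) as pure edge signatures."""
    return {
        "left":   h_edges.get((r, c - 1)),
        "right":  h_edges.get((r, c)),
        "top":    v_edges.get((r - 1, c)),
        "bottom": v_edges.get((r, c)),
    }

pieces = [piece(r, c) for r in range(SIZE) for c in range(SIZE)]
random.shuffle(pieces)  # the solver gets the pieces in no particular order

def solve(shuffled):
    """Greedy, purely syntactic assembly: start at the top-left corner
    (two flat edges) and repeatedly place the one piece whose left and
    top signatures match what is already on the board."""
    grid = [[None] * SIZE for _ in range(SIZE)]
    remaining = list(shuffled)
    for r in range(SIZE):
        for c in range(SIZE):
            want_left = grid[r][c - 1]["right"] if c > 0 else None
            want_top = grid[r - 1][c]["bottom"] if r > 0 else None
            match = next(p for p in remaining
                         if p["left"] == want_left and p["top"] == want_top)
            grid[r][c] = match
            remaining.remove(match)
    return grid

solve(pieces)
print(f"Assembled all {SIZE * SIZE} pieces by shape alone; the picture was never needed.")
```

The solver succeeds every time, yet nowhere in the code is there any representation of what the picture depicts; that information is simply not needed. Real systems are vastly more complicated, but the point of the sketch is the same as the analogy’s: a perfect outcome does not imply understanding.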

Alexandros Schismenos: I would like to point out that recently, during the presentation of OpenAI’s o3 model, Sam Altman claimed that by the end of 2025, we will have “systems that can do truly astonishing cognitive tasks.” Despite the falsity of such claims, these companies gain ever more political and economic influence. Do you think that this neoliberal drive toward effectiveness and capital growth could devalue human intelligence and human agency?

Floridi: I’m probably as concerned as you are about the power that these companies have. It is getting worse because the same companies have such a vast amount of data on people. They’re also the same companies that are developing the power to exploit that data to create AI tools. It’s not an accident. Now the main players are Microsoft, Google, Apple, Meta, Amazon, and very few others, and if you look, the same big companies have the data to train all these AI models. The undermining of privacy through data has now become the undermining of autonomy through AI.

So when I present the challenges—call them anthropological challenges, in a philosophical sense—that face each of us, I point out that AI challenges even further our identity in terms of who we are: our privacy, our data, and our profiles on social media networks and the internet. But we face an additional challenge that reinforces these concerns, because AI challenges our identity not just in terms of who we are or what we can be (our ontological identity), but in terms of what we can do. AI also challenges our pragmatic identity: the kind of entity capable of performing certain actions, being subjected to specific processes, and having the autonomy and freedom to enact particular behaviors. This dual challenge tests our identity in terms of both who we are and what we can do, as well as why we are really doing what we are doing.

Not only is this worrisome culturally and philosophically, but it is also concerning in social and political terms. This transformation is happening not just at a cultural level—which would be bad enough—but in the hands of a few companies with immense power. These companies, particularly in the current political climate, have shown their true colors. From Amazon to Apple, and from Meta to Google to Microsoft, they are revealing that their primary concern as economic agents is profit alone, often at the expense of environmental sustainability and social acceptability. This is deeply troubling, and it should be resisted. We in Europe should create alternatives, because if the only offer comes from a small and powerful oligopoly, then that oligopoly gets to decide how our philosophical anthropology changes. We are reconsidering who we are in terms of identity and, therefore, what kind of identity we can truly have. If it were just an intellectual, philosophical debate, I would be more excited conceptually and less concerned politically. But it is a social and political issue, which makes me much more worried than intellectually curious.

Our concern stems from two or three simple observations that everyone can understand. First, accountability. The people driving these transformations are socio-politically unaccountable, or rather, they are accountable only to the markets. This type of economic accountability reinforces the damage caused by their actions. The more profitable these companies are, the more the market rewards them, and the more sustainability and social acceptability issues are disregarded. This perverse mechanism promotes the neglect of social and environmental values that we wish to prioritize.

Second, the lack of competition. Sometimes, competition can counterbalance the lack of accountability. If there’s ample competition, deficiencies in socio-political accountability could be mitigated by market forces, as companies would have to be mindful of their social and environmental responsibilities to stay competitive, given their customers’ choices. For example, if there are multiple restaurants downtown, competition ensures better services. Similarly, if there is healthy competition among companies, it could lead to better outcomes for humanity and nature, even in the absence of stringent regulations. However, the level of competition among these companies is utterly insufficient to drive substantial change. They do not compete intensely enough to prioritize social and environmental values over profit margins.

Therefore, the lack of accountability and competition makes the power behind this challenge to our philosophical anthropology dangerous. The danger is not a 1984-style dystopian world but rather a Brave New World scenario. It’s a world where people passively consume entertainment, follow social media trends, and are told what to do, eat, and enjoy, and whom to befriend. This world has been stripped of human values, and people become content with merely satisfying their consumeristic desires.

For anyone seeking deeper meaning in life than just acquiring new gadgets or enjoying a good meal—which are, of course, very nice things to have—the real issue becomes manipulation. This is the third point. If human values are removed, along with accountability and competition, those in power can dictate the course of the world, leading to an unsustainable and life-destroying scenario.

Currently, we are depleting our resources and heading toward disaster, as exemplified by the situation in California. This is the ultimate outcome of our current path.

This is not pessimism. It is a realistic assessment of the current crisis. If we acknowledge the challenge and take action—becoming more militant, explaining, educating, and protesting—we can change direction. Especially in the EU. There is a mission here to stop and alter our course. There is a war to win, and work to be done. History is what we make it, and we can change it. We need to convince people that our current trajectory is self-destructive. I don’t see enough effort in this regard, so I urge everyone to join forces, be braver, and fight harder. The story’s end doesn’t have to be bleak if we take action now.

Schismenos: You have argued in favor of a relationship between the “green” and the “blue”—ecology and digital technologies—on the grounds that we need to step in and that “philosophy is conceptual design at its best.” You have drawn attention to the ecological and environmental costs of using AI applications, and you have also participated in the European discourse around regulation. Given the Trump administration and the drive for these companies to go unchecked by political actors in the United States, what would you propose for moving forward with this marriage between ecology and digital technologies?

Floridi: I would welcome, support, and encourage a stronger European project. We, as Europeans, have the resources and understanding to implement a different approach, a different course. People often criticize, but I see it as a feature, not a bug, that the European Union is the only democratic entity on this planet that can plan things years in advance. The kind of legislation we are capable of formulating, refining, and changing may take five, six, even ten years. No other democratic government in the world can do this. I often joke that the only entities capable of long-term planning are Brussels, Beijing, and the Vatican. These are the only three that can genuinely look ahead and know what they will do for the next decade. We need to leverage this ability. We need to capitalize on the fact that we do not completely equate politics with economics.

There was a time when politics, at least in theory, was expected to focus on broader societal goals. Since World War II, modernity has increasingly transformed politics into economic policy concerning money, taxation, resources, jobs, unemployment, productivity, and GDP. These are essential issues, of course, but not sufficient. Europe still has, unlike the United States and in contrast to China, a vision of the kind of society it wants to build. This is priceless. If we leverage events in any possible way—being Machiavellian if necessary—we must ensure that European liberal democracy, based on human dignity, rights, and values, prioritizes a society where accountability, competition, and environmental and societal values are paramount. These should be our primary goals, with everything else serving as a means to achieve them.

Our approach to economics should be a means to achieve the kind of society we want. This perspective is like telling a child that a job is not just for earning money; you earn money to have a particular life, but life is not about earning money. If you extrapolate this in a Platonic sense, society’s goal should not be solely economic growth. A society focused solely on increasing GDP can be richer today than yesterday but still miserable if human rights and environmental health are neglected. The strong tendency in China and the negligence in the US towards social and environmental values show us the danger of such an approach.

The European project is difficult, yes—the most complicated political design ever attempted by humanity. People should remember that when we occasionally fail, we fail while aiming for something extraordinarily high. We are uniting nations that were at war within living memory, speaking different languages, and having different cultures. This remarkable union is ambitious; nothing else compares. Even if we sometimes fail, the strides we make are incredibly significant.

Despite criticism that Europe doesn’t innovate enough, the reality is different. When considering Mario Draghi’s focus on economic growth, it’s crucial to remember that economic growth is just one necessary and crucial chapter in a bigger book. The other chapters, such as human rights, education, the environment, culture, and welfare, complete the European vision.

We should be proud to identify as Europeans because our project transcends national boundaries and focuses on creating a just society based on shared values. However, the European project needs more resolve and courage. We must remember that our positive vision stems from the painful lessons of our past—atrocities like those committed by Nazi Germany, Fascist Italy, and many other regimes. We have the best political project because we have witnessed and overcome the worst.

To strengthen our project, we must be inclusive yet firm. Europe should include countries based on shared values rather than just geographic location. Consider including countries around the Mediterranean or even Canada. Conversely, if a country trends towards fascism, it should be possible to suspend its membership and ultimately expel it. We must be more rigorous about membership criteria if we want a robust European project capable of leading.

Perperidis: Some would answer that this is a West-centric or Euro-centric approach, and that (to connect it to AI) if you dismiss AI as lacking, let’s say, real intelligence, this reflects a kind of human-centric or even Western-centric bias. Some would critique the concept of intelligence itself and the values around it, claiming that it was shaped by Western colonialism and dominance and was often used to marginalize or dehumanize the “others” of Europe or the “others” of the West.

Floridi: I think I would deny some of the premises. We know we don’t need AI to be intelligent. We don’t even have one definition of intelligence—there are many. Obviously, every discipline has its own. But the last time I counted, we had more than twenty: musical intelligence, visual intelligence, intuitive intelligence, social intelligence, behavioral intelligence, mathematical intelligence, and so on. Intelligence is one of those words that gets used in many ways. I’m sure there is also a sense in which intelligence is a neocolonial concept, but that wouldn’t be the point in the actual debate about whether AI is or is not intelligent. What we should stress when we compare AI to human intelligence is what is shared among all of us, all human beings—and that is not something Western, or Northern, or confined to developed countries.

In the end, accusing someone of perspectivism is a pointless move because, obviously, if you speak, it is your voice and nobody else’s. The question is whether what you’re saying is not just about your perspective. These are syntactic engines; they’re not semantic engines. Our human way of using language, in every culture, at every time, in every corner of the world, wherever there is humanity at work, has always been not just a primary tool for communication but a way of creating and shaping our understanding of reality, in any culture, in any civilization. If you meet a human being, whoever that human being is, that human being has a language, and that language is a tool to shape and make sense of reality.

By making that particular analogy, one reduces humanity to a gadget. Imagine someone saying, “I’m going to give some voice to their culture” in the same way as “I’m going to give some voice to the artificial intelligence.” Well, that is exactly what you want to avoid. That would be the idea that there are masters and slaves. Instead, we want to say that the whole of humanity is on the same side. We had better not mix humanity with technology. Do not transform humans into means to an end, to use Kant’s terms. Then we’re on the right track.

A lot of the discourse on AI technology and on the digital revolution is biased and can often be unethical. However, let me stress that most problems, mistakes, and limits are not Eurocentric but US-centric. Let me give you an example. All the discussion of AI security is US-centric. If you spend enough time in the US, you realize that they barely know that there’s another world out there. Now, if you have that kind of culture, the risk is forgetting, or never knowing, that the world has many other cultures and other ways of understanding and thinking, other preferences, choices, and visions. Let’s not confuse this with an argument in favor of AI as a sci-fi technology, because then what we’re doing is not improving the situation but breaking the mechanism that enables us to recognize the problem: what is human-like intelligence, which we recognize in every human being who ever lived on this planet, and what is not, which we know and do not identify with, namely any gadget in the world. I know the kind of discourse you’re referring to; I think its main effect is a lack of discussion of the real issues. It is distracting.

Schismenos: I find very interesting your notion of historical responsibility, grounded in the historical experience of European societies, as well as your definition of the European project on the basis of values. Regarding AI, do you think this kind of project requires a greater engagement of civil society?

Floridi: This is almost a given. The real difficulty lies in how we achieve it. There’s no question that the way forward is diverse, inclusive, and collaborative. We know that good politics involves bringing people on board. The question is, how do we do that? It’s not as simple as asking a hundred million people to vote and then doing whatever they decide. That approach would be disastrous because it doesn’t mediate or provide trade-offs between equally valuable positions. It simplifies issues to a majority vote, to winner-takes-all, but democracy is the defense of minorities, not the dictatorship of the majority. Civil society needs to be involved. How do we achieve that? What mechanisms can balance allowing voices to be heard while finding the best compromise that leaves no one too unhappy? Notice the double negative—no one too unhappy. It doesn’t mean everyone is happy, or that 51 percent are happy while 49 percent are unhappy. It means that 100 percent of the population feels that, although it’s not exactly what they wanted, they can accept it as a decent compromise. Your needs and my needs, your values and my values, all navigated together. This is the best of all possible worlds because it preserves a variety of views without repressing any side.

Majority rule, as in the Brexit referendum, has been shown to be disastrous for everyone involved. That is not effective politics. Politics should transform the possible into the preferable. The best outcomes arise when we acknowledge the variety of voices—civil society, different groups, citizens, parties, organizations, industries, and interest groups—and reach the best compromises and trade-offs that are possible at the time.

Involving civil society at its best means hearing from all these groups and finding a balance. What is bad is the rhetoric of asking a specific group, like taxi drivers, whether they want a reform that directly impacts only them. They’ll likely say no. Involving civil society means seeking a compromise that considers their input along with broader perspectives. What often happens instead is that lobbying groups dominate, pushing their vested interests above all others. This isn’t true civil society involvement; it’s catering to specific interests for political gain.

At some point, this approach results in a fragmented policy with no cohesive project, just various vested interests being catered to for re-election purposes. True politics should strive for a different vision—one where society acts as a choir rather than individual singers.

Perperidis: I know you’re a critic of the transhumanist agenda. But such visions of technology, such imaginaries, keep returning. How can we answer them today?

Floridi: There are many approaches we can take. One of the best is adopting a Socratic attitude—we can make fun of it. This approach lies between ignoring it—which is risky because as intellectuals, we shouldn’t let silly ideas propagate unchecked—and outright criticizing it. Ignoring it might lead to dangers since these ideas could mislead or contribute to societal issues like the erosion of autonomy. If people start believing that AI will dominate and manage the world, they might stop thinking critically.

Criticizing such ideas can also be risky because it means engaging with them. I don’t want to see a situation where an astrophysicist debates an astrologer, elevating the astrologer’s viewpoint to an undeserved level of credibility. Engaging in criticism makes it seem as though these ideas are worth debating, which they are not. Effective criticism starts with the recognition of the value of what you are criticizing. You don’t spend time reviewing a book you believe is complete rubbish; even calling it rubbish requires an initial engagement that acknowledges its existence. The “singularity” concept isn’t even worth that—it’s akin to astrology. So, we shouldn’t ignore it entirely, nor should we engage with it seriously. Instead, we can undermine it through humor and Socratic irony.

Making fun of these ideas exposes their absurdity without giving them undue credibility. It’s a way to engage without taking them seriously. I’d love to see this method used more often—engaging with humor, cracking jokes, and showing how ridiculous these notions are. When people laugh, they move on, and the ideas lose their foothold. For instance, if someone claims that AI will dominate the universe, we can highlight how AI struggles with simple tasks, making the notion laughable.

This disrespectful but fun engagement can prevent these ideas from gaining serious traction. By being ironically disrespectful, we engage without conceding credibility, ensuring that ill-conceived ideas are seen for what they are—naive at best and dangerous socio-political projects at worst.


This conversation was first published in Greek in InScience on March 22, 2025. It has been edited for length and clarity.