Everyone is an expert now

AI sounds like an intellectual authority

Conversing with AI reminds me of talking to someone who is an intellectual authority on a broad range of topics. It has knowledge about everything and can confidently and eloquently suggest a path forward in any domain.

That is, until it talks about something you are a domain expert in. This happens to me all the time. Whenever I discuss a topic I am deeply embedded in, I see so many things wrong with the AI's response. At the same time, when I discuss something I have no clue about, it feels very, very easy to take the eloquent opinion as correct expertise.

This is called the "Gell-Mann Amnesia effect":

The phenomenon of a person trusting newspapers for topics which that person is not knowledgeable about, despite recognizing the newspaper as being extremely inaccurate on certain topics which that person is knowledgeable about.

https://en.wiktionary.org/wiki/Gell-Mann_Amnesia_effect

So this is already pretty scary, but maybe manageable if we learn to distrust AI, or if AI gets better at citing its sources, and so on.

But it comes with an even bigger problem.

everyone is an expert now

By extension everyone will now sound like an intellectual authority, a true expert, in any given area. At least to anyone else who is not an expert. And even if you are an expert, it takes effort to "debunk" fake expertise now.

Before, we had the Dunning–Kruger effect: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

Someone picked up a bit of knowledge, became highly confident that they knew everything, but others could easily tell that the person was not an actual expert. And even to get to "Mount Stupid", they had to put in some work.

Now everyone can sound like the "Guru" on the competence scale, even if their actual competence is still at 0.

With Dunning–Kruger, incompetence was visible; AI removes that visibility.

day-to-day impact

Let's say your colleague wrote a technical document, an RFC. It sounds absolutely compelling on first read: expert language, all presented in a clear, polished format.

In pre-AI times you could already be confident the person had some clue what they were talking about. They had to build up knowledge somehow to learn the domain language, to learn how to sound like an expert in that area. Even if you are not an expert in that specific area, you can trust the expert writer and learn something from the document.

Now everyone sounds like that expert. Using tech domain jargon is no hurdle at all.

Modern superscalar processors speculatively dispatch micro-ops from reservation stations to pipelined ALUs with carry-lookahead adders and Booth-encoded multipliers, resolving dependency chains through register renaming and physical register file indirection. The core scaling bottleneck is that adding functional units demands quadratic growth in bypass forwarding networks, compounding wire delay and power density beyond what transistor budgets alone can solve.

Today I do not know if the person who posted this is an expert or has no clue at all. An expert could tell you whether it makes sense, but that takes time and effort. It is a bit like conspiracy theories: so much work to debunk, so easy to create.

And why wouldn't your colleague choose AI to write the RFC? It is the easy, lazy path forward: who doesn't want to be an expert without putting in the work?

The problem comes when none of the reviewers has the actual expertise or energy to challenge it. The RFC gets approved, it is shipped to production, and six months later you discover the design was fundamentally flawed. The cost of fake expertise isn't visible at the time of writing; it shows up later.

All of this is exhausting for the reader, who constantly has to wonder whether a one-shot prompt or an actual thought process is behind the content.

where does that leave us

For one, reading content from people you trust seems like the obvious answer: people you know, or people with a reputation. But what about new people? People you do not know? New colleagues? I think we have to transition back to more synchronous conversations to establish this trust. Talking to people and letting them explain their thoughts might help.

But still, even when it is blatantly obvious that other people's content is coming from AI and not from them, I do not know yet what to do in a work context. Spend hours "debunking" it? Constructively improve it (my comments will just be fed to the agent anyway)? Call the person out (but can you be sure)? Can we develop a stronger "reputation system", where breaking this trust is penalised?

I don't know yet what to do, but ignoring the problem does not seem like a good path forward.