The Honest AI Brief For Ambitious Early Career Lawyers

What I've been reading and what it means for your career, now.

I want to tell you about the first time that a lawyer used the internet.

Not in a “where-were-you-when-the-insert-celebrity-name-died” way. More in a “this-is-the-closest-comparison-I-have” way.

Let's not forget that there was a moment, somewhere in the mid-nineties, when using the internet was still optional. When you could choose not to engage with it and the partners at your firm could wave it away as a fad.

There was a time, not that long ago, when the people asking questions about it were considered, at best, enthusiastic and, at worst, a bit embarrassing.

But the lawyers who went all in anyway? They built different careers. They were faster, more connected and more useful to their clients. They had better tools than everyone else.

The ones who waited, or resisted, or decided to hold on just a little longer to the way things had always been done? They caught up eventually, but they spent years behind. They worked harder for the same output. They were always in catch-up mode.

The same thing happened when the BlackBerry came along, and then the iPhone, and then cloud computing.

I have been thinking about this a lot lately.

Because over the past few months I have been doing what I always do when something starts to feel significant: reading, listening, talking to people, sitting with the discomfort of not having a clean understanding yet and eventually arriving somewhere I feel comfortable enough to share. AI is that thing right now. Not as hype and not as panic, but as a genuine inflection point that I think we owe it to ourselves to look at clearly.

Four things I have read or listened to recently crystallised something for me. Together, they form a picture I think matters for anyone early in their legal career.

The data is in and it is striking.

Niki Black is the Principal Legal Insight Strategist at 8am, and her team recently released the 2026 Legal Industry Report. I was listening to her break it down on the Geek in Review podcast and a few numbers stopped me.

69% of legal professionals are now personally using general-purpose AI tools for work. That is more than double last year's figure. For context, this is a profession not historically known for sprinting toward new technology. The adoption has happened fast, and it has happened from the ground up, with individual lawyers not waiting for permission.

Here is where it gets interesting, though. 43% of firms have no formal AI policy and no plans to create one. Only 9% are actively enforcing a written policy.

So you have a profession where more than two thirds of practitioners are using AI in their daily work, and fewer than one in ten are operating inside any kind of institutional framework for doing so. That gap is the defining tension of this moment. Lawyers are not waiting for governance to catch up. They are just going.

Which means the risk, the accountability and the professional judgment required to navigate all of this sits squarely with individuals, and right now, most of them are figuring it out alone.

Niki also made a point I thought was quietly significant: the pressure to change the traditional billing model is building, slowly, from exactly the places you would expect. Sophisticated clients like in-house counsel know what legal work should cost and are starting to demand that AI-driven efficiency flow through to their invoices. It is not a revolution yet. But it is coming, and the firms and lawyers who understand these tools at an operational level will be positioned very differently when it does.

Gary Vee says you are early, and he means it.

Gary Vaynerchuk has been banging the AI drum hard and his recent newsletter piece on how AI is changing the workforce is characteristically direct. His argument is not subtle: technology does not care about your opinions, it just keeps moving. The people who operate with a yes or a maybe mentality will win. The people who stay in a world of no will be left behind.

What I found more interesting than the urgency, though, is the framework he uses. He draws a distinction between architects and masons. The architect has the vision, designs the plan, oversees the execution. The mason lays the bricks. His argument is that AI is going to do most of the bricklaying from here, which means the question for everyone, and especially for anyone early in their career, is whether you are positioning yourself as an architect or a mason.

He also says something I know in my bones to be true: that what employers will actually be hiring for is humanity. Not skills in the traditional sense, because skill is a commodity and it is about to become an even bigger one. What is not a commodity is emotional intelligence. How you handle feedback. How you navigate adversity. How you actually engage with people. As the world becomes more AI-saturated, the most human will win.

I find that genuinely encouraging for early career lawyers as a cohort. Because the things we do well, when we are at our best, are deeply human things. Judgement. Trust. The ability to hold complexity and ambiguity and still give someone a clear steer.

That does not go away. It becomes more valuable.

Allie K. Miller says to build real workflows and not just vibe.

Allie K. Miller runs AI with Allie, one of the most practically useful AI newsletters around. Her recent piece on the AI job threat, synthesising two significant research reports sitting at opposite ends of the disruption debate, is worth reading in full. But the through line I keep coming back to is this: the gap between people who use AI as a toy and people who use it as infrastructure is widening every week.

There is a difference between opening ChatGPT occasionally when you have a question and actually building workflows, integrating tools into the way you work, making AI a genuine operating partner in how you practise. The former gives you a slight edge. The latter changes the architecture of what you can deliver. Miller is emphatic on this: start seriously, not casually.

She is also clear-eyed about the uncertainty. Nobody actually knows whether the disruption will be fast or slow, sweeping or gradual. The two reports she analysed come to radically different conclusions about the timeline. But here is what both sides of that debate agree on: the people who understand these tools will be in a better position regardless of which scenario plays out. Whether the disruption is fast or slow, fluency with AI is protective.

The other thing she flags, and which I think is particularly relevant for lawyers, is that the places where humans still have a meaningful edge are exactly the places worth leaning into: judgement, taste, knowing what good looks like, relationship-building, navigating ambiguity, leading through uncertainty. These are not soft skills in the dismissive sense. These are the irreducible core of good legal work.

Profiles in Legal says we need to be in the room where it is designed.

Profiles in Legal is a newsletter I respect for its sober, institutional lens, and its recent piece on agentic AI was not breathless hype. It opened with a line I thought was quietly significant: this is not a technology newsletter, but occasionally a technological shift is so consequential for risk and accountability that it cannot be ignored.

Their focus is on agentic AI: the shift from AI that responds to prompts to AI that executes multi-step workflows, takes actions, interacts with systems and generates work products autonomously. The implication for legal risk is meaningful. When systems begin to act rather than passively react, exposure multiplies.

But the point that matters most for junior lawyers is this: AI does not shift responsibility. Accountability still sits with people. What this era actually does is put a premium on human strategic judgement, because someone still has to design the workflow, take responsibility for the risk it carries and make the calls the machine cannot. Legal needs to be embedded in the architecture of these systems from the start, not brought in at the auditing stage.

That means lawyers who understand how these tools work, who can engage substantively with governance questions and who can sit in rooms with technologists and product teams and C-suites and actually contribute, will be genuinely valuable in a way that lawyers who are waiting it out simply will not be.

So what does "building a workflow" actually look like in practice?

This is the question I get most often when I talk about AI with early career lawyers, and it is a fair one. The advice to use AI as infrastructure rather than a toy can feel abstract when you are not sure where to start.

Here is one concrete answer: AI Agent Skills.

A Skill is a portable, reusable set of instructions that teaches an AI assistant how to perform a specific task, your way. Think of it as encoding a piece of legal expertise into something the AI can load and apply on demand. You are not starting from scratch every time you open a chat window. You are giving the model a methodology.
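To make that concrete, here is a minimal, entirely illustrative sketch of what a skill file might look like, loosely following the markdown-with-frontmatter convention used by agent skill formats such as Anthropic's. The task, the standard positions and the checklist items are all hypothetical, invented for this example, not drawn from any real library.

```markdown
---
name: nda-first-pass
description: Reviews an NDA and produces a clause-by-clause issue log.
---

# NDA first-pass review

When asked to review an NDA, work through it clause by clause and, for
each clause, record in a table:

1. The clause reference and a one-line summary.
2. Any deviation from our standard position (mutual confidentiality,
   two-year term, no non-solicit).
3. A preferred redline and a fallback position.
4. A severity rating: low / medium / high.

Never advise on enforceability in a specific jurisdiction; flag those
questions for a qualified lawyer to resolve instead.
```

The point is not the specific format. It is that your methodology, the questions you would ask and the order you would ask them in, is written down once and applied consistently every time, instead of being reinvented in each chat.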

Lawvable is building a hub specifically for legal Agent Skills. Their library includes things like an NDA reviewer that produces a clause-by-clause issue log with preferred redlines, fallbacks, rationales and deadlines, a contract review skill that flags deviations from your organisation's negotiation playbook, and a mediation preparation skill that identifies party positions, BATNA scenarios and zones of possible agreement. These are reusable, compatible with Claude, ChatGPT and Gemini, and built by legal practitioners.

This is what Miller means by infrastructure rather than a toy.

A lawyer who has built or adopted a suite of Skills for the work they do regularly is not just using AI. They are practising differently. They are faster, more consistent and more able to take on volume without sacrificing quality. They are also, not coincidentally, developing the kind of operational fluency that makes them the person an organisation actually needs in the room when governance questions come up.

But as Niki Black's data makes clear, most firms are not providing this. The governance gap is real, the training gap is real and the policy gap is real.

Which means the lawyers who build this knowledge themselves, right now, are not just getting ahead. They are filling a vacuum that their institutions have left open.

What all four of them agree on.

Across very different voices, writing for very different audiences, a few things keep surfacing.

The first is that the technology is happening whether you engage with it or not, and the only variable is whether you are fluent when it matters.

This goes back to the internet lesson. The lawyers who got on the internet early were not necessarily the most technically gifted. They were the ones who decided that curiosity was a better bet than resistance.

The second is that the human things are not going away, they are being amplified. EQ, judgement, strategic thinking, the ability to build relationships, hold trust and handle ambiguity: these become more precious as more of the procedural, the routine and the repeatable becomes automated.

This is good news for people paying attention. It is a strong signal about where to invest in yourself.

The third, and this is the one I want to be direct about for anyone in the early stages of their career: you have a genuine first-mover advantage right now, but it will be temporary. Being at the start of a technology shift when you are also at the start of your career is rare.

The lawyers who understand AI at an operational level, who can build workflows and engage with governance questions and sit comfortably in those conversations, will be the ones organisations need to hire, not fire.

So what do you actually do with this?

  • You start using these tools properly. Not as a party trick, not to generate a first draft and feel smug about it, but to genuinely understand what they can and cannot do.

  • You bring intellectual rigour to your prompting, because the quality of what you get out is a direct function of the quality and depth of what you put in.

  • You explore what Skills exist for the work you do and start thinking about how you would teach an AI to do a task the way you would want it done. That thinking process alone is clarifying.

  • You pay attention to what AI cannot do well yet and invest in being very good at exactly those things.

  • You stay close to the conversation about how AI is being integrated into legal work specifically, because that is where the governance questions, the accountability frameworks and the new disciplines are being built. That room needs lawyers who know what they are talking about.
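To ground the prompting point above: the difference between casual and rigorous use is usually visible in the prompt itself. A purely hypothetical before-and-after, with invented matter details:

```text
Weak:   "Summarise this contract."

Better: "You are assisting a junior corporate lawyer. Review the attached
services agreement for a software vendor. Produce: (1) a one-paragraph
summary of the commercial deal; (2) a table of the five highest-risk
clauses with clause references; (3) for each, our likely negotiating
position and one fallback. Flag anything you are uncertain about rather
than guessing. Do not advise on governing-law specifics."
```

The second prompt does what good delegation to a human junior does: it sets context, specifies the deliverable and defines the boundaries of the task.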

Technology is a tool and it has always been a tool. The lawyers who made email their enemy did not stop email from arriving. The ones who made the internet optional did not stop the internet from becoming mandatory.

The question in those moments is always the same: do you pick it up, or do you wait for everyone else to and then catch up at a sprint?

The internet arrived and some lawyers decided to wait and see. That is a valid strategy right until it is not.

You are early. Act like it.

Mel

~

Sources referenced: Gary Vaynerchuk, "How AI Is Changing the Workforce," Underpriced Actions (February 2026); Allie K. Miller, "The AI Job Threat: What to Know and What to Do Right Now," AI with Allie (March 3, 2026); Philip, Profiles in Legal (March 2026); Niki Black, The Geek in Review podcast, "Niki Black on AI Adoption, Billing Pressure, and the Governance Gap in Legal" (March 9, 2026); Lawvable, lawvable.com.
