Job descriptions for software roles have changed faster in the last eighteen months than they did over the previous decade. Recruiters are asking for skills that didn’t appear on postings two years ago, and some of what used to be required has quietly fallen off the list. When’s the last time you worried about listing Microsoft Word proficiency on a resume?

The numbers move with the shift. AI-related skills appeared on just over 5% of tech job postings in 2024 and climbed past 9% in 2025, according to CIO’s roundup of the year’s fastest-growing IT skills. Most of that climb is recruiters racing to catch up with how the job has actually changed.

If recruiters are screening for different things, the practical question for someone newer to the field gets specific quickly: what exactly are they screening for, and how does a next-gen developer actually build it?

AI Fluency Is Now the Baseline

Listing “prompt engineering” on a resume used to be a differentiator. Now it lands closer to listing Microsoft Word or Google Docs. Hiring managers assume it, just like they assume the candidate has actually used at least one AI coding tool and has an opinion about which one fits which job.

AI fluency alone no longer gets a candidate to an interview. Recruiters want to see it paired with something more specific, and the sections below cover what that usually means.

Specification Precision

The first thing recruiters screen for, often without naming it directly, is whether a candidate can tell an AI exactly what they want. Some hiring teams call this specification precision. The mechanics are pretty mundane: a vague prompt gets vague code back, while a more specific one that spells out the edge cases tends to produce something workable on the first try. The gap there is mostly about thoroughness, not vocabulary.

Candidates who can write a clear specification, including the edge cases, the inputs and outputs, and what failure should look like, work much faster with AI tools than candidates who can’t. Some hiring teams now test for this directly. An interview test or take-home assignment might ask a candidate to write the prompt they would use to build a feature and then grade the output the AI produces from it. Three years ago, that test did not exist.
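To make the gap concrete, here is a minimal sketch of what a precise specification buys you. The function name, the username rules, and the spec text are all invented for this illustration; the implementation below is the kind of output a precise prompt should produce on the first try.

```python
# Vague prompt: "write a function that validates usernames"
# Precise spec: the same request with inputs, outputs, and edge cases
# spelled out. (The rules here are made up for the example.)
SPEC = """
Write a Python function validate_username(name) -> bool.
Rules:
  - 3 to 20 characters: ASCII letters, digits, and underscores only
  - must start with a letter
  - return False (do not raise) for non-string input
  - leading or trailing whitespace is invalid, not trimmed
"""

import re

def validate_username(name) -> bool:
    # Reject non-strings instead of raising, per the spec.
    if not isinstance(name, str):
        return False
    # One letter, then 2-19 more allowed characters: 3-20 total.
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9_]{2,19}", name) is not None
```

The vague prompt leaves every one of those rules to the model's imagination; the precise one turns the first draft into something reviewable against a checklist.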

Auditing What the AI Produces

AI-generated code looks correct more often than it actually is, and a candidate who can spot the bugs, security holes, and architectural decisions that will cause problems six months from now is worth a lot more than one who accepts whatever the model outputs.

What auditing looks like in practice: reading code carefully and asking whether it actually does what it claims, catching things like database queries that are open to injection or authentication logic with a quiet hole in it, and recognizing when the AI has confidently invented a function or library that does not exist. None of this is exotic or sexy work. Senior engineers have always done it. The volume has changed, and the speed.
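The injection case is worth seeing side by side. This sketch uses Python's standard sqlite3 module with a throwaway in-memory table; the table and function names are stand-ins for the example.

```python
import sqlite3

# Throwaway in-memory database for the illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # What AI-generated code often looks like: string formatting puts
    # user input directly into the SQL, so input can rewrite the query.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # The audited version: a parameterized query. The driver treats the
    # input as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

An input like `' OR '1'='1` returns every row from the unsafe version and nothing from the safe one. Both versions look correct at a glance, which is exactly why the reading habit matters.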

For someone earlier in their career, the most accessible version of this skill is the habit of asking the AI to walk through its own work. Prompts like “what would break this code” or “where might this fail at scale” usually get surprisingly honest answers from current models. Doing that consistently, and then actually verifying the answers, is how the skill builds over time.

Tool Fluency

Recruiters notice when a candidate can name specific tools and explain the tradeoffs between them. They care much less about which tools end up on the list, and much more about whether the candidate has actually used them.

In practice, fluency means working knowledge of at least one AI-powered editor (Cursor and Antigravity are common), comfort with one of the major conversational AIs for both generating code and thinking out loud, and some exposure to terminal-based AI coding tools when projects get more involved. Knowing which model is better at which task is becoming its own competency. A candidate who can say something like “I draft in one model, code-review in another, and use a third for explaining unfamiliar code” signals a sophistication that recruiters are starting to listen for.

Shipping End-to-End

The strongest hiring signal in 2026 is a candidate who can take a project from idea to deployment without needing someone else to handle the last mile. End-to-end ownership used to be senior-engineer territory. AI tooling has compressed the timeline enough that early-career candidates can now make a real claim to it on small projects.

End-to-end means setting up version control, deploying to a real domain, handling basic authentication when the project calls for it, and producing something a stranger can actually use. Three small finished projects beat one ambitious half-finished one in almost every portfolio review, according to recruiters. Many new coders hear it and build the ambitious half-finished one anyway. The ones who don’t tend to move through hiring a lot faster.

Foundations

Python and JavaScript still appear on a remarkable number of postings. The reason hiring managers care has shifted, though. They aren’t expecting candidates to write thousands of lines of code by hand anymore. They want someone who can read, audit, and extend the code an AI wrote first.

For new coders, this changes the ROI on what to study. Memorizing syntax pays off less than it used to. Reading code carefully and understanding why it’s structured a certain way has a higher payoff. A candidate who can look at an AI-generated function, explain in plain language what it does, and predict what would happen if a particular line changed is doing exactly the work the job now expects.
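A concrete version of that exercise, using a classic Python pitfall that AI output still reproduces; the function is invented for the illustration.

```python
def add_tag(tag, tags=[]):
    # The AI-written version. Looks fine, but the default list is
    # created once, at definition time, and shared across calls.
    tags.append(tag)
    return tags

first = add_tag("python")   # returns ["python"]
second = add_tag("ai")      # returns ["python", "ai"] -- and `first`
                            # now points at that same mutated list

def add_tag_fixed(tag, tags=None):
    # Predicting the effect of changing one line: a None default plus
    # an explicit check gives each call its own fresh list.
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

Explaining why the second call returns two items, and what the one-line change fixes, is exactly the reading-and-predicting work the paragraph above describes.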

What a Portfolio Looks Like Now

The shape of a hireable portfolio has shifted in the last three years. Tutorial completions and clone-of-X projects don’t carry the weight they used to. The portfolios that get read carefully tend to share a few traits.

The first is real projects the candidate actually uses. Something built to fix a problem in the candidate’s own life lands better than a generic to-do app, partly because the motivation is harder to fake. A deployed, finished, slightly idiosyncratic project signals that the candidate has the focus to identify a real problem, the ambition to code a solution, and the patience to get it live.

The second is public writing about what got built and why. DEV posts, blog entries, even GitHub READMEs that walk through the design choices behind a project demonstrate the communication skills real engineering teams care about. A candidate who can explain their technical choices in writing is usually also the one who can review code well and handle a hard design conversation without making it worse.

The third is some evidence of collaboration: hackathons, open source contributions, group projects, anything that shows the candidate has worked with other people on code. MLH events and similar programs partly exist for this reason. What comes out of a weekend hackathon usually isn’t production quality, but it does show that the candidate can work with strangers, under time pressure, on something real.

Where to Start

For a new coder wondering how to actually start building these skills, the path is shorter than it looks. Build something small end-to-end. Write about what got built. Bring the next project to a hackathon, or to a community where other people are also figuring it out. Then do it again.

MLH hackathons are a natural place to run this loop in public. DEV is where the writing tends to happen, and both are free and full of people earlier in their careers than they’d often let on. The difference between a candidate who gets called back and one who doesn’t is rarely raw talent. It’s usually whether they’ve built things, talked about what they built, and kept going.

Frequently Asked Questions

Do I still need a computer science degree?

Most postings don’t require one. Plenty still list it as preferred, though. The pattern in recent hiring reports is that shipped projects can increasingly stand in for the credential. Candidates without degrees who can show real work often clear the same screening bar that degree holders are assumed to pass on credentials alone.

Is prompt engineering really a job skill or a buzzword?

Both. The term has been overused, and a lot of what gets called prompt engineering is really just being thorough. The underlying skill is real, though. Writing clear specifications, anticipating edge cases, and steering a model toward useful output genuinely separates productive AI users from frustrated ones, and most recruiters can tell the difference quickly.

Which AI tool should a beginner learn first?

The answer matters less than the question implies. Pick any of the major conversational AI tools, use it seriously on a real project for a few weeks, then try a second one for comparison. Recruiters care less about loyalty to a particular product and more about whether a candidate can think critically about which tool fits which job. Working with two or three tools and being able to explain the differences carries more weight than deep specialization in one.

How do hackathons help with hiring?

A hackathon produces a few things recruiters value: a project that actually shipped under pressure, evidence that the candidate can work with other people on code, and a story worth telling in an interview. Hiring managers ask about projects because the answers reveal how a candidate thinks and how they handle the parts that didn’t go well. A hackathon project is almost custom-built for that kind of question. MLH runs hundreds of hackathons a year, and most are free to enter.

What about certifications?

Certifications carry weight in some contexts, particularly cloud platforms like AWS and Google Cloud, where the major ones can correlate with meaningful salary bumps. For most software roles, they’re a secondary signal. Paired with deployed projects, a certification helps. Without those projects, it tends to count for less than a portfolio with no certifications at all.

Will AI replace these jobs in five years?

The honest answer is that nobody knows. The current trend is that AI is changing what software work looks like rather than reducing the total amount of it. People who can collaborate effectively with AI tools are seeing their productivity rise, and the scope of what they can build keeps expanding. Candidates who learn to work alongside these tools have a real chance of being more valuable in five years, not less.