bro, you're learning the wrong thing
ft. your roadmap to learning the right thing
Before we begin.
We finished 100+ hours of buildathons in 2026. Meghana built a medication reminder app. Abhinav built MantriAI: an AI-powered job hunter, resume editor, and interview prep tool. Shikha built an AI personal stylist.
Hundreds have built consumer-ready AI tools at our buildathons. Some in under 8 hours. Want to try it out for yourself? Join the next GrowthX buildathon in your nearest city.
Today’s edition.
This is a special one from our Co-founder & CEO, Udayan. Buckle up. We’re diving right into it.
it’s tuesday night.
someone in your slack just shared a new ai tool. “this is insane, everyone needs to learn this.” you click the link. watch the demo. add it to the list of 30 things you’ll get to eventually.
by friday there’s another one.
that tool your colleague shared?
irrelevant in a week.
replaced by something newer, shinier, with a better landing page. so don’t stress about it. seriously.
here’s what actually matters.
whether ai stays or something entirely new replaces it next year, the thing that grows your career doesn’t change. business impact.
not tools. not frameworks. not certificates. the ability to take a skill and use it to move a number the company cares about. revenue. cost. speed. retention. that’s been true for 50 years. it’ll be true for the next 50.
so instead of writing another “top ai tools” article, i mapped the ai skills that i believe will matter for the next year, across seven functions, and tied each one to the business outcome it creates. this comes from hundreds of personal calls with members at growthx (5,500 professionals, 50+ learning events every month) and tracking what people are getting hired and rejected for right now.
if a skill doesn’t move a business metric, it’s a hobby.
but what does “business impact” actually mean?
every business has an equation. even the fuzzy ones. and most people never write theirs down.
let’s take a simple one.
say you’re a creative strategist doing short-form video.
video views = number of posts × average views per post
then, break it down.
↳ number of posts = editing bandwidth + creative strategy bandwidth
↳ average views = hook quality × editing quality × topic relevance × timing
go deeper.
↳ ↳ editing bandwidth = editors available × hours per editor × edits per hour
↳ ↳ creative strategy bandwidth = strategists × ideas tested per week × approval speed
↳ ↳ hook quality = pattern interrupt strength × curiosity gap × first 3 second retention rate
↳ ↳ topic relevance = trend alignment × audience pain point match × search demand
↳ ↳ timing = posting hour × platform algorithm cycle × cultural moment
see how deep this goes?
each sub lever is something you can measure. and each one is something ai can potentially move.
now look at this. really look at it.
if you’re doing 10 shorts a month at 50k average views, that’s 5 lakh views monthly. want 50% growth? you either need 5 more shorts per month or your average views need to go up by 50%.
the bottleneck isn’t ideas. it’s editing bandwidth.
each short takes 4 days. ideation and scripting 3 to 4 hours. recording and production 1 day. editing and publishing 2 days. your team can only do so many in parallel.
can ai compress editing from 2 days to 4 hours? maybe. that’s the input lever. that’s where the skill matters. not “i learned ai” but “i used ai to move this specific number.”
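here’s that lever math as a toy script. the numbers are the ones from above; the 160 hour editing month is my assumption:

```python
# toy model of the equation above: views = posts × average views per post,
# where posts are capped by editing bandwidth (the stuck lever).

EDITING_HOURS_PER_MONTH = 160  # assumption: one full-time editor

def monthly_views(editing_hours_per_short: float, avg_views: int = 50_000) -> int:
    posts = EDITING_HOURS_PER_MONTH // editing_hours_per_short
    return int(posts * avg_views)

print(monthly_views(16))  # editing takes 2 days (~16h): 10 shorts, 500,000 views
print(monthly_views(4))   # ai compresses it to 4h: 40 shorts, 2,000,000 views
```

same team, same ideas. the only thing that moved was the input lever.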
here’s what i want you to do.
before you scroll to your function, write down your company’s growth equation. or at least your team’s OKR. break it into input levers. find the one that’s stuck. then look at the skills below and ask: which one unsticks my lever?
because skills are like tools.
you can use them across any input lever depending on the context of your business, the size of the company, the stage you’re at, and the market you’re in. two product managers at two different companies might both need “context engineering” but for completely different reasons.
i cover 7 functions:
design
product
engineering
marketing
generalists
HR / people
sales
scroll to your function. find your gaps. & build.
design
i’m going to be honest. i don’t know what to call this skill yet. but it might be the most important one in this entire article. first, the equation.
impact = reach × Δmetric × quality × velocity
break it down.
↳ reach = total people who see your asset (creative, landing page, product surface)
↳ Δmetric = absolute lift in the business metric the design targets (conversion, sign ups, activation)
↳ quality = attention captured × clarity
↳ velocity = experiments shipped per period ÷ planned cadence
you don’t own all four.
you own quality and velocity. reach is a marketing/product lever. Δmetric is a function of copy, price, and users.
but if you 10x your velocity, you run 10x more experiments, and your Δmetric improves because you’re finding better solutions faster.
the product, design, and engineering roles are merging. not in theory. right now. the boundaries are dissolving, and the designers who see it happening are moving fastest.
here’s what we’re seeing at @growthx_club
designers are pushing code directly to production. not handing off figma files to devs. not writing specs. raising actual PRs. they have the entire dev codebase set up on their machines. they’re building features, fixing ui bugs, shipping components.
and they’re doing things with ai that were never possible in figma.
one example: we had an animation that would have taken weeks to spec in figma, another week for a dev to build, another round of back and forth to get the timing right. with ai, a designer described exactly how they wanted it to animate through voice prompting. built it. shipped it. hours, not weeks.
that’s the edge. ai lets you do things you couldn’t do in figma. and figma lets you do things ai can’t do yet. the skill is knowing which tool for which moment and pushing both to their limit.
now, the messy part.
the code isn’t always clean. we’re dealing with that right now. building the right skills files for different kinds of projects. setting up different code review standards for ai generated code versus human written code. it’s a real problem and anyone telling you vibe coding “just works” is selling you something.
but here’s what i know: the designer who can go from concept to working code to user feedback in an afternoon? who ships 10 experiments while someone else is still in figma debating typefaces? that person is operating on a different plane.
on the equation: velocity goes from 1 experiment per sprint to 10.
time from idea to user feedback drops from weeks to hours. and because you’re testing more, your Δmetric improves. one designer at a D2C brand went from 10 hero banner variations per week to 250+. conversion lift? 23%. same design sense. 25x output.
figma skills still matter. but the designer who can do both? the person who moves between design tools and code depending on what the problem needs? that’s the role companies are hiring for. they just don’t have a name for it yet.
design ✅
product
engineering
marketing
generalists
HR
sales
product
your ceo just said “we need ai in the product.” cool. where? which feature? what metric does it move? that’s your job now. here’s the equation that answers it.
impact = adoption × retention × revenue per user × time to ship
break it down.
↳ adoption = onboarding completion × time to first value × aha moment hit rate
↳ retention = day 7/30/90 cohort curves × churn rate (inverse)
↳ revenue per user = ARPU × expansion rate × upsell conversion
↳ time to ship = spec to launch days × scope accuracy × dependency resolution speed
you own adoption and retention most directly through prioritisation, spec quality, and experiment design. revenue per user is shared with growth/monetisation. time to ship is shared with engineering.
now look at this equation and ask: where will ai create the most impact on THESE levers?
1/ context engineering
your ai feature gives a wrong answer. you check the model. model’s fine. you check the prompt. prompt’s fine. turns out the system was feeding it three year old product docs and ignoring the user’s last 10 interactions. that’s a context engineering failure.
context engineering is about designing what information gets fed to a model and in what structure. what goes into the system prompt. what gets retrieved from a database. what user history matters. what gets discarded.
it sounds simple. it’s the hardest part of building good ai products. bad context in, garbage out. no amount of model quality fixes bad context design.
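here’s a minimal sketch of what “designing the context” means. the retrieval and history functions are stand-ins for your own stores; the deliberate ordering and the token budget are the point:

```python
SYSTEM_PROMPT = "you are a support assistant for our product."

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic: ~4 characters per token

# stand-ins for your real history and retrieval stores
def recent_interactions(user_id: str, last_n: int = 10) -> str:
    return "user asked about billing twice this week."

def retrieve_docs(query: str, max_age_days: int = 90) -> str:
    return "billing docs, last updated 12 days ago."

def build_context(query: str, user_id: str, token_budget: int = 8_000) -> str:
    """assemble the model's context deliberately instead of dumping everything in."""
    sections = [
        ("system", SYSTEM_PROMPT),                             # stable instructions first
        ("history", recent_interactions(user_id, last_n=10)),  # the last 10 turns, not all of them
        ("docs", retrieve_docs(query, max_age_days=90)),       # fresh docs only, nothing 3 years old
    ]
    parts, used = [], 0
    for name, text in sections:
        cost = estimate_tokens(text)
        if used + cost > token_budget:
            continue  # discard what doesn't fit instead of truncating mid-section
        parts.append(f"## {name}\n{text}")
        used += cost
    return "\n\n".join(parts)
```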
on your equation, this lever hits accuracy of ai features, which directly affects adoption (aha moment hit rate) and retention (does the feature actually help?).
2/ multi agent teams
single agents were 2025. one agent, one task, one tool. 2026 is about orchestrating teams of agents that work together.
a researcher agent gathers data. a writer agent drafts the output. a reviewer agent checks for quality. a coordinator agent manages the whole thing, decides who does what, handles failures, routes exceptions. this mirrors how human teams work, except it runs at machine speed.
the product skill here is designing these multi agent architectures. which agents need to exist? how do they communicate? what happens when agent 3 disagrees with agent 1? how do you give humans the right level of oversight without bottlenecking the whole system?
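a toy version of that researcher → writer → reviewer team, with the coordinator as plain control flow. `call_llm` is a stand-in for whatever model client you use:

```python
def call_llm(role: str, task: str) -> str:
    return f"[{role} output for: {task}]"  # stand-in; swap in a real api call

def researcher(topic: str) -> str:
    return call_llm("researcher", f"gather facts on {topic}")

def writer(topic: str, research: str) -> str:
    return call_llm("writer", f"draft a post on {topic} using: {research}")

def reviewer(draft: str) -> bool:
    verdict = call_llm("reviewer", f"does this meet the bar? {draft}")
    return "output" in verdict  # stand-in check; a real reviewer returns pass/fail

def coordinator(topic: str, max_retries: int = 2) -> str:
    """decides who does what, handles failures, routes exceptions."""
    research = researcher(topic)
    for _ in range(max_retries + 1):
        draft = writer(topic, research)
        if reviewer(draft):
            return draft
    return f"ESCALATE to human: {topic}"  # oversight without bottlenecking every run

print(coordinator("multi agent systems"))
```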
on the equation: multi agent teams hit time to ship. things that required a team of 5 humans coordinating across sprints can now be orchestrated as agent workflows that run in minutes. spec to launch days drops. dependency resolution speed goes up. gartner reported a 1,445% surge in multi agent system inquiries from early 2024 to mid 2025. that’s not hype. that’s companies seeing the time to ship lever move.
3/ evals
how do you know if your ai feature is actually good?
not “users seem to like it.” not “it feels better.” actual measurement. and this is harder than it sounds because ai outputs are stochastic. you run the same prompt twice, you get different results. traditional QA doesn’t work here.
evals is the skill of building systematic ways to measure ai quality. accuracy rates. hallucination frequency. task completion rates. regression detection. and increasingly, using llms themselves as judges to evaluate other llms at scale.
the best eval practitioners do something most teams skip entirely.
they start with error analysis. look at where the system is failing, build a taxonomy of failure modes, then design evals that catch those specific failures before they reach users. it’s the difference between “ship and pray” and “ship and know.”
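minimally, that looks something like this. the failure modes come from your own error analysis; `judge_llm` is a stand-in for a judge-model call:

```python
from collections import Counter

# failure taxonomy built by reading real transcripts, not guessed in advance
FAILURE_MODES = ["hallucinated_fact", "ignored_user_history", "stale_docs"]

def judge_llm(question: str, answer: str, failure_mode: str) -> bool:
    """stand-in: ask a judge model 'does this answer exhibit <failure_mode>?'"""
    return False  # replace with a real model call that returns True on failure

def run_evals(test_cases: list[dict]) -> Counter:
    failures = Counter()
    for case in test_cases:
        for mode in FAILURE_MODES:
            if judge_llm(case["question"], case["answer"], mode):
                failures[mode] += 1
    return failures  # track per-mode rates release over release, not one blended score

cases = [{"question": "what plan am i on?", "answer": "pro, renewed last week."}]
print(run_evals(cases))
```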
on the equation: evals protect retention. a feature that hallucinates erodes trust. trust erosion shows up in your day 7/30/90 curves. the pm who can build a proper eval system is protecting the retention lever from degradation. evals is the #1 course topic on maven right now. that should tell you something about demand.
design ✅
product ✅
engineering
marketing
generalists
HR
sales
engineering
two years ago, you shipped features. now you ship features that talk to models that talk to tools that talk to databases that sometimes hallucinate. welcome to 2026. here’s your equation.
impact = ship velocity × system reliability × cost efficiency
break it down.
↳ ship velocity = deployment frequency × lead time for changes (inverse) × PR throughput
↳ system reliability = uptime × p95 latency (inverse) × error rate (inverse) × eval pass rate
↳ cost efficiency = output quality ÷ (inference cost + infra cost per request)
you own all of it. the question is which lever is the bottleneck right now.
1/ agentic infrastructure & tool use
agents need infrastructure that didn’t exist two years ago.
tool calling frameworks. MCP servers that let agents interact with external services. agent runtimes that manage state across multi step workflows. orchestration layers that decide which agent handles which task. error recovery when an agent takes a wrong turn three steps into a chain.
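stripped to its core, an agent runtime is a loop: the model picks a tool, the runtime executes it, the result goes back in, and failures get fed back instead of crashing the chain. a sketch, with the tool and `pick_action` as stand-ins:

```python
import json

def search_orders(order_id: str) -> str:
    return json.dumps({"order_id": order_id, "status": "shipped"})  # stand-in tool

TOOLS = {"search_orders": search_orders}

def pick_action(goal: str, history: list) -> dict:
    """stand-in for the model choosing the next step from goal + history."""
    if not history:
        return {"tool": "search_orders", "args": {"order_id": "A-1042"}}
    return {"tool": None, "answer": f"done: {history[-1]}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        action = pick_action(goal, history)
        if action["tool"] is None:
            return action["answer"]
        try:
            result = TOOLS[action["tool"]](**action["args"])
        except Exception as err:
            result = f"tool failed: {err}"  # feed the failure back so the model can recover
        history.append(result)
    return "ESCALATE: step budget exhausted"  # never let an agent loop forever

print(run_agent("where is order A-1042?"))
```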
but there’s another side to this that nobody’s talking about.
as ai generates more code, PR volume is growing 10x. your engineering org suddenly has hundreds, potentially thousands of pull requests being generated daily. someone needs to build the infra to review that volume. automated code review systems that catch bugs, security issues, and architectural violations in ai generated code. deployment pipelines that can handle the throughput. testing infrastructure that scales with the output.
on the equation: this is the ship velocity lever. deployment frequency and PR throughput go up. but only if the infra can handle it. without it, your 10x PR volume becomes a 10x bottleneck at code review.
2/ context engineering
from the engineering side: designing and implementing the context window. what gets stuffed in, in what order, with what priority. token budgets. dynamic context selection based on the query. caching strategies for frequently used context.
when the context window is 200k tokens, the naive approach is “just shove everything in.” the skilled approach is knowing that a well curated 8k context outperforms a sloppy 200k one. on the equation: this hits both system reliability (better outputs, fewer errors) and cost efficiency (fewer tokens, lower cost per request). more isn’t better. better is better.
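“well curated beats sloppy” in code: score candidate snippets against the query, pack the best ones until the budget runs out. the word-overlap scorer is a toy; in production you’d use embedding similarity:

```python
def score(query: str, snippet: str) -> float:
    q, s = set(query.lower().split()), set(snippet.lower().split())
    return len(q & s) / max(len(q), 1)  # toy relevance; swap in embeddings

def pack_context(query: str, candidates: list[str], budget_tokens: int = 8_000) -> list[str]:
    ranked = sorted(candidates, key=lambda c: score(query, c), reverse=True)
    picked, used = [], 0
    for snippet in ranked:
        cost = len(snippet) // 4  # rough token estimate
        if used + cost <= budget_tokens:
            picked.append(snippet)
            used += cost
    return picked  # highest-relevance snippets in, everything else discarded

docs = ["billing faq: refunds take 5 days", "2019 product roadmap", "billing api changelog"]
print(pack_context("billing refund question", docs, budget_tokens=10))
```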
3/ model customization & cost optimization
the default move is calling the biggest model for everything. that works until your inference bill hits 6 figures and your cfo starts asking questions.
fine tuning on your data. distilling a large model into a smaller faster one. quantization. lora adapters. but also: multi model strategies. running open source models for routine tasks, proprietary ones for complex reasoning. routing queries to the right model based on complexity. mixing deepseek for simple classification with claude for nuanced generation.
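the simplest version of routing is a classifier in front of two model tiers. a sketch, with the model names and heuristic as placeholders:

```python
SMALL_MODEL = "open-source-7b"   # cheap: classification, extraction, routine replies
LARGE_MODEL = "frontier-model"   # expensive: multi-step reasoning, nuanced generation

def is_complex(query: str) -> bool:
    # toy heuristic; in production this is usually a small trained classifier
    return len(query.split()) > 40 or any(w in query.lower() for w in ("why", "compare", "plan"))

def route(query: str) -> str:
    return LARGE_MODEL if is_complex(query) else SMALL_MODEL

print(route("classify this ticket: refund request"))         # open-source-7b
print(route("compare these two pricing strategies for q3"))  # frontier-model
```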
this matters at high scale.
if you’re at a startup doing 1,000 api calls a day, just use the big model. if you’re at a company doing 10 million calls a day, this is the cost efficiency lever. 80% cost reduction on a 7 figure inference bill is a 7 figure saving. simple math.
4/ evals
your team ships a new ai feature. outputs look good on 100 test cases. you push to production. three days later, support tickets spike. turns out the model started hallucinating edge cases nobody tested for.
evals is the skill that prevents this.
you build eval pipelines that run on every deployment. not after. on. you use llm as judge systems where one model evaluates another’s output against criteria you define. you build failure taxonomies so you know exactly what kind of errors your system makes and how often. you set up automated eval gates in your ci/cd so bad outputs can’t ship.
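the gate itself is a few lines; the work is the eval suite behind it. a sketch, with the thresholds and suite as stand-ins:

```python
import sys

def run_eval_suite() -> dict:
    """stand-in: run your evals, return per-category pass rates."""
    return {"accuracy": 0.94, "hallucination_free": 0.97, "format_valid": 0.99}

THRESHOLDS = {"accuracy": 0.90, "hallucination_free": 0.95, "format_valid": 0.98}

def eval_gate() -> int:
    results = run_eval_suite()
    failed = {k: v for k, v in results.items() if v < THRESHOLDS[k]}
    if failed:
        print(f"eval gate FAILED: {failed}")
        return 1  # nonzero exit fails the ci/cd step, so bad outputs can't ship
    print("eval gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(eval_gate())
```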
on the equation: evals is the system reliability lever. eval pass rate goes up. error rate goes down. the engineer who can build a proper eval pipeline is, and i quote two hiring managers, “an instant hire.”
design ✅
product ✅
engineering ✅
marketing
generalists
HR
sales
marketing
your competitor just launched 500 ad variations last week. your team did 12. same budget. same audience. they’re winning and you can’t figure out why. here’s why: their equation looks different from yours now.
impact = reach × conversion × frequency × brand equity
break it down.
↳ reach = channel volume × impression share × new channel penetration
↳ conversion = creative quality × offer relevance × landing page effectiveness
↳ frequency = content velocity × publishing consistency × repurpose rate
↳ brand equity = share of voice × sentiment × recall
you own all of these. but frequency and conversion are the levers where ai creates the biggest gap right now. the team producing 50 pieces of content a month is simply running more experiments than the team producing 10.
1/ ai content creation
not “use chatgpt to write blog posts.” that’s how you get content that sounds like everyone else’s content.
this is about building end to end content systems with ai.
short form video: feed your top performing hooks into a model. generate 100 variations. score them against performance data. then use ai to automate storyboarding, base cuts, audio cleaning, and colour grading. what used to take 4 days per short now takes hours.
long form: build brand voice models that maintain consistency across 50 pieces a month instead of 4.
visual content: generate hundreds of ad creative variations and test them before committing to production. google reported that advertisers generated nearly 70 million creative assets using gemini in Q4 2025 alone. that’s a 3x year over year increase. this isn’t experimental anymore.
on the equation: this is the frequency lever. same team, 5x output. cost per piece goes down. experiments per month go up. and every extra experiment is a data point that makes the next piece better.
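the generate-then-score loop from above, sketched. `generate_variations` stands in for a model call, and the scorer is a toy; in practice you’d train it on your own retention data:

```python
def generate_variations(hook: str, n: int = 100) -> list[str]:
    return [f"{hook} (variation {i})" for i in range(n)]  # stand-in for a model call

def score_hook(hook: str) -> float:
    # toy scorer: counts signals that correlate with strong hooks in your data
    signals = ("you", "stop", "nobody", "?")
    return sum(sig in hook.lower() for sig in signals) / len(signals)

top_hooks = ["stop editing your shorts like it's 2023"]
candidates = [v for h in top_hooks for v in generate_variations(h)]
best = sorted(candidates, key=score_hook, reverse=True)[:10]  # only the top 10 get produced
print(best[0])
```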
2/ ai-led performance marketing
traditional performance marketing: you set bids, pick audiences, adjust budgets weekly. maybe daily if you’re good.
ai led performance marketing: the system reads campaign data in real time, shifts budget between channels mid day, pauses underperformers, and scales winners. your job becomes designing the strategy and constraints. the machine executes.
platforms like google’s performance max and meta’s advantage+ have made this the default. the ai handles bidding, targeting, creative assembly, and placement. the human marketer’s job is now strategic direction, not daily optimization.
on the equation: this is the conversion lever. CPA goes down. ROAS goes up. and you’re managing 10x more campaigns with the same headcount because the machine does the optimisation you used to do manually.
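the core loop the platforms run is recognisable: read performance, cut losers, move budget toward winners. a toy version (numbers and threshold made up):

```python
campaigns = {"search": {"spend": 1000, "revenue": 4200},
             "social": {"spend": 1000, "revenue": 1100},
             "video":  {"spend": 1000, "revenue": 2600}}

def reallocate(campaigns: dict, total_budget: float, min_roas: float = 1.5) -> dict:
    roas = {name: c["revenue"] / c["spend"] for name, c in campaigns.items()}
    winners = {n: r for n, r in roas.items() if r >= min_roas}  # pause underperformers
    total = sum(winners.values())
    return {n: round(total_budget * r / total, 2) for n, r in winners.items()}

print(reallocate(campaigns, total_budget=3000))
# social (roas 1.1) gets paused; search and video split the budget by performance
```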
3/ AEO (ask engine optimization)
google is no longer the only front door. 800 million people a week now use chatgpt alone to answer questions, compare options, and plan next steps. they’re asking “best crm for 50 person teams” or “running shoes for flat feet under 10k.” and these models pull answers from structured data, not page rank.
your brand must show up when an ai answers a question.
that means structured data, schema markup, entity relationships, faq optimization. traditional SEO got you ranked. AEO gets you cited.
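schema markup is just structured data the models can parse. here’s the shape of a FAQ block in JSON-LD (the question and answer are illustrative), generated from python:

```python
import json

faqs = [("best crm for a 50 person team?",
         "acme crm is built for 20-200 seat teams and integrates with whatsapp.")]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {"@type": "Question", "name": q,
         "acceptedAnswer": {"@type": "Answer", "text": a}}
        for q, a in faqs
    ],
}

# drop the output into a <script type="application/ld+json"> tag on the page
print(json.dumps(faq_schema, indent=2))
```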
on the equation: this is the reach lever. a brand new channel. almost zero competition right now. the marketers who figure this out first own a channel their competitors don’t even know exists yet. and unlike SEO, where results take months, AEO results can show up in weeks because the field is so new.
design ✅
product ✅
engineering ✅
marketing ✅
generalists
HR
sales
generalists / operations
you’re the person who makes the company actually run. nobody writes job descriptions for what you do because half of it didn’t exist last quarter. here’s your equation.
impact = process efficiency × error rate (inverse) × cost per unit × scalability
break it down.
↳ process efficiency = automation coverage × handoff friction (inverse) × SOP clarity × exception handling speed
↳ scalability = current throughput ÷ max throughput at current headcount
you own process efficiency and error rate fully. cost per unit is shared with finance. scalability is shared with leadership. the input levers you can move with ai? automation coverage and exception handling speed. those are the two that change your career.
1/ agentic workflows
a customer complaint comes in. the agent reads it, classifies the issue, checks the customer’s history, drafts a response, escalates if needed, and logs everything. no human in the loop for 80% of cases.
that’s an agentic workflow. not just automation (if this, then that) but ai making decisions along the way. deciding what to escalate. deciding which template fits. deciding when a human needs to step in.
the skill is designing these workflows so they handle edge cases gracefully. because the 20% of cases that need a human? those are the ones that determine whether your customers love you or leave.
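the complaint workflow above, sketched. the classifier and drafter are stand-ins for model calls; the escalation rule is the part you design:

```python
def classify(complaint: str) -> str:
    return "billing" if "charge" in complaint.lower() else "other"  # stand-in model call

def draft_reply(complaint: str, issue: str) -> str:
    return f"re your {issue} issue: ..."  # stand-in for a model-drafted reply

def handle_complaint(complaint: str, history: dict) -> dict:
    issue = classify(complaint)
    # the escalation rule is the real design decision: these cases always get a human
    if issue == "other" or history.get("lifetime_value", 0) > 100_000:
        return {"action": "escalate", "issue": issue}
    return {"action": "auto_reply", "issue": issue, "reply": draft_reply(complaint, issue)}

print(handle_complaint("i was charged twice this month", {"lifetime_value": 4_000}))
# {'action': 'auto_reply', 'issue': 'billing', 'reply': 're your billing issue: ...'}
```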
back to the equation: one good agentic workflow moves automation coverage from 30% to 80%. that changes your cost per unit, your throughput, your scalability. one workflow. three levers.
2/ ai adoption strategy
this one’s for you if you’re at a large company.
most enterprises are terrible at rolling out ai internally. they buy tools, send a slack message saying “hey team, we now have copilot,” and wonder why adoption is 8% after three months. 500 licenses purchased. 40 people actually using it.
ai adoption strategy is about knowing which teams benefit first, designing workflows that embed ai into existing processes (not alongside them), measuring actual productivity gains, and handling the very real resistance from people who think ai is coming for their jobs.
at a 50 person startup, the founder just tells everyone to use it, and they do. at a 5,000 person enterprise, you need a strategy. change management. pilot teams. success metrics. executive buy in.
the lever here? the productivity delta between teams using ai and teams that aren’t. that delta is where the roi case lives. and that delta is what gets you promoted.
design ✅
product ✅
engineering ✅
marketing ✅
generalists ✅
HR
sales
HR / people function
your annual review process takes 6 weeks. involves 14 spreadsheets. and tells managers what they already knew. ai is about to make all of that look prehistoric. here’s the equation that matters.
impact = talent density × time to productivity × retention of top performers
break it down.
↳ talent density = offer acceptance rate × quality of hire score × sourcing channel effectiveness
↳ time to productivity = onboarding completion speed × tool access time × buddy/mentor assignment
↳ retention of top performers = engagement score × internal mobility rate × comp competitiveness × manager quality
you own talent density and time to productivity almost entirely. retention is shared with managers across the org. the lever that’s most stuck at most companies right now? sourcing channel effectiveness. everyone is fishing in the same linkedin pool.
1/ ai powered talent intelligence
forget the job board. forget the linkedin search with 15 filters.
think about every person you’ve ever met professionally. every whatsapp contact. every linkedin connection. every person who attended that conference. every colleague from your last three companies. that’s your real talent network. and right now, it’s sitting in 6 different apps doing nothing.
talent intelligence is about building a CRM on top of all of it. ai that scans your entire professional network, tracks career moves, identifies when someone might be open to a conversation, and surfaces the right person for the right role before you even post the job.
for talent acquisition, the skill is building systems that go beyond active candidates. passive talent identification. market mapping. competitor hiring pattern analysis. compensation benchmarking.
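one concrete slice of this: scoring your existing network for “might be open to a conversation.” the signals and weights are illustrative:

```python
contacts = [
    {"name": "priya", "role": "backend engineer", "months_in_role": 38, "just_moved": False},
    {"name": "arjun", "role": "backend engineer", "months_in_role": 6,  "just_moved": True},
]

def openness_score(c: dict, target_role: str) -> float:
    score = 0.0
    if target_role in c["role"]:
        score += 0.5            # role match
    if c["months_in_role"] > 30:
        score += 0.3            # long tenure often precedes a move
    if not c["just_moved"]:
        score += 0.2            # people who just switched rarely switch again
    return score

ranked = sorted(contacts, key=lambda c: openness_score(c, "backend engineer"), reverse=True)
print([c["name"] for c in ranked])  # priya first: matched role, 3+ years in seat
```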
on the equation: this is the sourcing channel effectiveness lever. the recruiter who builds these systems fills roles 40% faster and with better quality hires because they’re fishing in a bigger, smarter pond. that moves talent density directly.
2/ employee experience automation
onboarding is where most new hires decide if they made the right choice. and at most companies, onboarding is a mess. 47 emails. 12 forms. a buddy who forgot they were assigned. three weeks before you get access to the tools you need.
ai can fix the entire flow.
automated onboarding sequences that adapt to the role. instant answers to benefits questions. leave request processing without waiting for hr to check a spreadsheet. internal transfer workflows that don’t require 6 approvals.
the skill is building these automations without making employees feel like they’re talking to a wall. the best employee experience automations feel invisible. the worst ones make people rage quit. knowing the difference is the skill.
on the equation: this is the time to productivity lever. onboarding completion speed goes up. tool access time drops. new hires hit full productivity in 2 weeks instead of 6. and the hr team handles 3x the headcount without adding staff.
design ✅
product ✅
engineering ✅
marketing ✅
generalists ✅
HR ✅
sales
sales
your top rep closes 30% of qualified leads. your average rep closes 12%. the gap used to be talent. now it’s systems. here’s the equation.
impact = pipeline value × close rate × deal velocity
break it down.
↳ pipeline value = number of qualified leads × average deal size
↳ close rate = proposal win rate × objection resolution effectiveness
↳ deal velocity = first contact to signed days × internal approval speed
you own pipeline value and close rate almost entirely. deal velocity is shared with legal and internal ops. the lever that separates your top rep from your average one? they have better pipeline (higher quality leads) and better preparation (they never walk into a call without context). ai can give that to every rep.
1/ ai powered lead generation
traditional lead gen: buy a list, blast emails, pray. ai powered lead gen: build systems that identify companies showing buying signals (new funding, leadership changes, tech stack shifts, job postings that indicate a problem you solve).
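the signal-scoring half of this, sketched. sourcing the signals (funding databases, job boards, tech-stack trackers) is the real engineering work; the scoring itself is simple:

```python
SIGNAL_WEIGHTS = {
    "raised_funding_90d": 3,     # fresh budget
    "new_cxo_hire": 2,           # new leaders buy new tools
    "hiring_for_pain_point": 2,  # job posts describing the problem you solve
    "tech_stack_shift": 1,
}

def score_account(signals: set) -> int:
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

accounts = {"acme fintech": {"raised_funding_90d", "hiring_for_pain_point"},
            "stale corp": set()}
hot = {name: score_account(s) for name, s in accounts.items() if score_account(s) >= 3}
print(hot)  # {'acme fintech': 5} — these go to reps first
```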
on the equation: this is the pipeline value lever. the sales person who can set up and tune these systems fills their pipeline with warm leads while everyone else is cold calling from a stale database. number of qualified leads goes up. average deal size goes up because you’re targeting the right companies.
2/ hyper personalized communication
not “hi {first_name}, i noticed you work at {company}.”
real personalization means a system that reads the prospect’s recent linkedin posts, their company’s earnings call, their industry news, and generates outreach that references something specific and relevant. at scale. for hundreds of prospects.
the skill isn’t writing one great email. it’s building a system that writes hundreds of great emails that each feel like they were written just for that person.
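that system is mostly context assembly; the model call comes last. a sketch, with the fetchers and `call_llm` as stand-ins:

```python
def fetch_recent_posts(prospect: str) -> str:
    return "posted about onboarding churn last tuesday"  # stand-in for a real fetcher

def fetch_company_news(company: str) -> str:
    return "earnings call flagged rising support costs"  # stand-in

def call_llm(prompt: str) -> str:
    return f"[drafted email from: {prompt[:50]}...]"      # stand-in for a model call

def draft_outreach(prospect: str, company: str, pitch: str) -> str:
    context = (
        f"prospect's recent post: {fetch_recent_posts(prospect)}\n"
        f"company news: {fetch_company_news(company)}\n"
        f"what we do: {pitch}\n"
        "write 4 sentences that open with the specific thing they said, not our product."
    )
    return call_llm(context)

print(draft_outreach("asha", "acme", "we cut onboarding time 40%"))
```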
on the equation: this is the close rate lever. response rates go from 2% to 15%. because the email actually said something the prospect cared about. more responses = more conversations = more closes. the math is direct.
3/ ai assisted sales calls
real time call analysis. the ai listens to your sales call and surfaces relevant case studies, pricing comparisons, objection handling scripts. mid conversation.
post call: automatic summary, action items, crm updates, sentiment analysis.
on the equation: this hits both close rate (objection resolution effectiveness goes up because you always have the right data) and deal velocity (first contact to signed days drops because nothing falls through the cracks). and your crm is actually accurate for the first time ever because the ai fills it in, not the rep.
design ✅
product ✅
engineering ✅
marketing ✅
generalists ✅
HR ✅
sales ✅
so here’s the question.
you just scrolled through seven functions and close to 20 skills. some you’ve heard of. some are new. some probably made you uncomfortable because you realized there’s a gap you’ve been filling with bookmarks instead of building.
i don’t know which skill is right for you. i don’t know your function, your level, your company’s problems.
but i know the difference between the people who read articles like this and the people who build after reading them. they’re two completely different populations.
pick your function. pick 2 skills. not 7. not 10. two.
go deep for 60 days. build something you can show someone. something that solves a problem a company would pay for.




