1 min read • from Data Science
Which fields are most and least likely to be impacted by AI?
Our take
The impact of AI on the various fields within data science and machine learning is a topic of growing interest. While AI is poised to change how we handle coding tasks, the harder parts of data science, such as understanding business context and achieving specific goals with the data available, remain difficult to automate. This raises the question of which areas, like forecasting, optimization, and anomaly detection, are most susceptible to automation, and which will retain a human touch. Engaging in this discussion can illuminate the future landscape of these fields.
Certainly AI will affect how much coding we do by hand. The actual data science part is harder to automate, because every problem requires business context and an understanding of how to achieve your goal with the data you have.
That being said, as someone who has concentrated heavily in one niche (forecasting), I am curious which fields in DS/ML people think are most or least likely to be automated substantially by AI. Forecasting, Optimization, A/B testing, Causal Inference, Vision, Anomaly Detection, etc?
Related Articles
- How do you think AI will impact data science jobs? Would love to hear everyone’s thoughts. I’ve been seeing some pretty impressive new tools that I think have serious implications for data science jobs. (submitted by /u/a_girl_with_a_dream)
- Where do you see HR/People Analytics evolving over the next 5 years? Curious how practitioners see the field shifting, particularly around: AI integration, predictive workforce modeling, skills-based org design, ethical boundaries, data ownership changes, and HR decision automation. What capabilities do you think will define leading functions going forward? (submitted by /u/Proof_Wrap_2150)
- Does automating the boring stuff in DS actually make you worse at your job long-term? Been thinking about this a lot lately after reading a few posts here about people noticing their skills slipping after leaning too hard on AI tools. There's a real tension between using automation to move faster and staying sharp enough to catch when something goes wrong. Like, automated data cleaning and dashboarding is genuinely useful, but if you're never doing that work yourself anymore, you lose the instinct for spotting weird distributions or dodgy groupbys. There was a piece from MIT SMR recently that made a decent point that augmentation tends to win over straight replacement in the long run, partly because the humans who stay engaged are the ones who can actually intervene when the model quietly does something dumb. And with agentic AI workflows becoming more of a baseline expectation in 2026, that intervention skill matters even more, since these pipelines are longer, more autonomous, and way harder to audit when something quietly goes sideways. The part that gets me is the deskilling risk nobody really talks about, honestly. It's easy to frame everything as augmentation when really the junior work just disappears and the oversight expectation quietly shifts to people who are also spending less time in the weeds. The ethical question isn't just about job numbers; it's about whether the people left are actually equipped to catch failures in automated pipelines or whether we're just hoping they are. Curious if others have noticed their own instincts getting duller after relying on AI tools for a while, or whether you've found ways to keep that hands-on feel even in mostly automated workflows. (submitted by /u/taisferour)
- AI isn’t making data science interviews easier. I sit in hiring loops for data science/analytics roles, and I see a lot of discussion lately about AI “making interviews obsolete” or “making prep pointless.” From the interviewer side, that’s not what’s happening. There are a lot of posts about how you can easily generate a SQL query or even a full analysis plan using AI, but that only means we make interviews harder and more intentional, i.e. focusing more on how you think rather than whether you can come up with the correct/perfect answers. Some concrete shifts I’ve seen: SQL interviews get a lot more follow-ups, like assumptions about the data or how you’d explain query limitations to a PM or the rest of the team. For modeling questions, the focus is more on judgment, so don’t just practice answering which model you’d use; also think about how to communicate constraints, failure modes, trade-offs, etc. Essentially, don’t just rely on AI to generate answers. You still have to do the explaining and thinking yourself, and that requires deeper practice. I’m curious, though, how data science/analytics candidates are experiencing this. Has anything changed with your interview experience in light of AI? Have you adapted your interview prep to accommodate this shift (if any)? (submitted by /u/KitchenTaste7229)
Tagged with
#automated anomaly detection, #big data management in spreadsheets, #generative AI for data analysis, #conversational data analysis, #Excel alternatives for data analysis, #real-time data collaboration, #intelligent data visualization, #data visualization tools, #enterprise data management, #big data performance, #data analysis tools, #data cleaning solutions, #rows.com, #business intelligence tools, #financial modeling with spreadsheets, #AI, #forecasting, #data science, #machine learning, #ML