Does automating the boring stuff in DS actually make you worse at your job long-term?
Our take
Been thinking about this a lot lately after reading a few posts here about people noticing their skills slipping after leaning too hard on AI tools. There's a real tension between using automation to move faster and actually staying sharp enough to catch when something goes wrong. Like, automated data cleaning and dashboarding is genuinely useful, but if you're never doing that work yourself anymore, you lose the instinct for spotting weird distributions or dodgy groupbys.

There was a piece from MIT SMR recently that made a decent point: augmentation tends to win over straight replacement in the long run, partly because the humans who stay engaged are the ones who can actually intervene when the model quietly does something dumb. And with agentic AI workflows becoming more of a baseline expectation in 2026, that intervention skill matters even more, since these pipelines are longer, more autonomous, and way harder to audit when something quietly goes sideways.

The part that gets me is the deskilling risk nobody really talks about, honestly. It's easy to frame everything as augmentation when really the junior work just disappears and the oversight expectation quietly shifts to people who are also spending less time in the weeds. The ethical question isn't just about job numbers; it's about whether the people left are actually equipped to catch failures in automated pipelines, or whether we're just hoping they are.

Curious if others have noticed their own instincts getting duller after relying on AI tools for a while, or whether you've found ways to keep that hands-on feel even in mostly automated workflows.
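To make that "weird distributions or dodgy groupbys" point concrete, here's a minimal sketch of the kind of hands-on sanity check the post is describing. None of it comes from the thread itself; the DataFrame, column names, and invariants are all hypothetical stand-ins for whatever an automated cleaning step hands you.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the output of an automated cleaning step.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "segment": rng.choice(["a", "b", "c"], size=1_000),
    "value": rng.exponential(scale=10.0, size=1_000),
})

# Distribution check: tail quantiles surface skew and outliers that a
# mean-only summary (or an auto-generated report) can quietly hide.
print(df["value"].describe(percentiles=[0.01, 0.25, 0.5, 0.75, 0.99]))

# Groupby audit: a "dodgy groupby" often shows up as one group with
# suspiciously few rows or an implausible summary statistic.
print(df.groupby("segment")["value"].agg(["count", "median", "mean"]))

# Cheap invariants that fail loudly instead of letting bad data flow on.
assert df["value"].notna().all(), "unexpected NaNs after cleaning"
assert (df["value"] >= 0).all(), "negative values where none should exist"
```

Nothing fancy, which is the point: five minutes of eyeballing output like this is the "staying in the weeds" habit the post is worried about losing.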
Related Articles
- Is the DS/ML role slowly being morphed into an AI engineer? [D] Agents are amazing. Harnesses are cool. But the fundamental role of a data scientist is not to use a generalist model in an existing workflow; it's a completely different field. AI engineering is the body of the vehicle, whereas the actual brain/engine behind it is the data scientist's playground. I feel like I am not alone in this realisation that my role somehow got silently morphed into that of an AI engineer, with the engine's development becoming a complete afterthought. Based on industry requirements and ongoing research, most of the work has quietly shifted from building the engine to refining the body around it. Economically, this makes sense, as working with LLMs or other deep learning models is a capital-intensive task that not everyone can afford, but the fact that very little of the role's identity is preserved is concerning. Most of the time, when I speak to data scientists, the core reply I get is that they are fine-tuning models to preserve their "muscles". But fine-tuning is a very small part of a data scientist's role; heck, after a point, it's not even the most important part. Fine-tuning is a tool. Understanding, I believe, should be the fundamental block of the role: realising that there are things other than "transformers" and finding where they fit into the picture. And don't even get me started on the lack of understanding of how important the data is for their systems. A data scientist's primary role is not the model itself; it's everything around developing it: the data quality at hand, the appropriate problem framing, efficiency concerns, architectural literacy, evaluation design, and error analysis. Amid the AI hype, many have overlooked how much of this role is now treated as static and unimportant. AI engineering is an amazing field, and the folks who do amazing things with the models always inspire me. But somehow, the same attention and respect are no longer paid to the foundational, scientific side of data and modeling in the current industry. I realise it's not always black and white, but it's kind of interesting how the grey is slowly becoming darker by the day. Do you feel the same way? Or is it just my own internal crisis bells ringing unnecessarily? For those of you who have recognized this shift, how are you handling your careers? Are you leaning into the engineering/systems side and abandoning traditional model development? Or have you found niche roles/companies that still value the fundamental data scientist role (data quality, architectural literacy, statistical rigor)? I'd love to hear how you are adapting. submitted by /u/The-Silvervein
- What has been people's experience with "full-stack" data roles? I started my career being a jack of all trades - hired as a data analyst, but I had to extract, clean, and then analyze data, and even sometimes train models for simple predictions and categorization. That actually led me to become a data engineer, but I've spent most of my career working closely with data scientists and trying my best to make their jobs easier by taking all the preprocessing tasks off their plates so they can focus on training, inference, MLOps, etc. While I claim to have helped them, to be honest DE teams often become a bottleneck and an obstacle: everything from not being able to provide the training data on time, to processing the data wrong and causing bad performance, to them going live with a model blindly because we couldn't get them the observation data in time to analyze accuracy. I'm wondering how much of the data engineering work can be automated/vibed away by data scientists. My guess is that in larger companies this won't be the case, but startups and SMBs want to move fast, so they'd rather have data scientists own the whole pipeline. What has been others' experience with this, and where is it heading? submitted by /u/uncertainschrodinger
- How do you keep up without burnout? DS sometimes feels like there's an infinite amount of things to learn. The most recent trend has been AI engineering. And it's not like AI came in so you can deprioritize something else; instead, it just gets added to the heap. So you already had this massive amount of content to know from stats & product, traditional ML, deployment, ops, engineering, cloud, etc., and then you add the new thing on top, and then the next new thing. And when you read the job descriptions, they literally list all of this. I just had an interview for a random gaming company that wanted cloud, Snowflake, stats, ML, ops, and AI experience in one person, and it was for like 3-5 years of experience. And I wish this were a one-off thing, but it seems to be getting more common. It actually feels like FAANG is easier to interview for because they silo people and don't expect you to know and do everything. What is your strategy for learning these skills without getting exhausted, or do you feel companies' expectations are inflated? Is this a byproduct of AI where people are expected to do a lot more with less? submitted by /u/LeaguePrototype
- How are you helping your company understand the limitations of AI-derived data? From my perspective, one of the biggest challenges of data science as a field right now is the tension between: A) AI can give "pretty good" answers extremely fast and democratizes access to them; B) those answers are often decent, but can be nontrivially "wrong"; C) that "wrongness" is often not exposed for months or years. That is, AI fully democratizes "getting a number" for our biz stakeholders across just about any business problem. A lot of the time that number is off some but still pretty good and useful, but we all know sometimes it's catastrophically wrong. However, even in those worse cases, there's pressure to move fast, so the consequences of that wrong number are not eaten or discovered until a good while later (when you find out a prediction was wrong retroactively, when flaws in a matching process are discovered, when it turns out to have been the wrong "data-informed" decision, etc.). This is exacerbated by a lot of biz users seemingly either not understanding, or simply not caring, that the number could be wrong. That's not helped by perverse incentive structures either. So my question is: what, if anything, are you doing at your company to help stakeholders understand that? Or more importantly, to help build a culture that handles the scenario more responsibly? (Yes yes, there's maybe not much we can do about it. CEO whims and all that. But I'm interested in what steps people are taking proactively.) submitted by /u/jshkk
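One concrete tactic for the "wrongness isn't exposed for months or years" problem in that last post is to log every AI-derived number with enough provenance to score it once ground truth finally arrives. This isn't from the thread; it's a minimal sketch, and the function name, file path, and fields are all assumptions:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("predictions_log.csv")  # hypothetical location

def log_prediction(entity_id: str, predicted: float, model_version: str) -> None:
    """Append one AI-derived number with provenance, so it can be scored
    against the eventual ground truth months later."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp_utc", "entity_id", "predicted", "model_version"])
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            entity_id,
            predicted,
            model_version,
        ])

# Later, once actuals exist, join them back against this log and measure
# how wrong the "pretty good" numbers really were, per model version.
log_prediction("customer_42", 118.5, "churn-ltv-v3")
```

The column that matters most is model_version: when the retroactive accounting happens, you want to know which pipeline produced the number, not just that "the AI" did.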