1 min read • from Data Science
Should coding interviews just become vibe coding interviews at this point?
Our take
In today’s rapidly evolving tech landscape, the relevance of traditional coding interviews is increasingly questioned. Many candidates are now harnessing AI to assist with coding tasks, prompting a reevaluation of whether obscure algorithms and complex data structures are still the best measures of a developer’s capabilities. Instead of testing memory recall, should interviews focus on practical experience with machine learning concepts or collaborative coding exercises that reflect real-world scenarios?
I don’t really get why interviews are still so focused on obscure data structures, algorithms, or complex SQL and pandas problems. At this point, most of us are using AI in some capacity to write or assist with code anyway.
Why does it still matter if I can invert a binary tree in 10 minutes from memory? Wouldn’t it make more sense to talk about actual experience, ML concepts, or even do a coding exercise where AI is allowed, like how people actually work on the job?
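For reference, the whiteboard staple mentioned above fits in a few lines. A minimal sketch (the `TreeNode` class and `invert` function names are illustrative, not from any particular interview):

```python
class TreeNode:
    """A hypothetical minimal binary tree node."""
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def invert(node):
    """Recursively swap the left and right children of every node."""
    if node is None:
        return None
    # Swap subtrees after inverting each one.
    node.left, node.right = invert(node.right), invert(node.left)
    return node
```

The irony, of course, is that any AI assistant produces this instantly, which is the point the post is making.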
Why do you think companies are still stuck using these older methods to evaluate candidates?
Related Articles
- Interview Experience: Big teams look for potential, smaller teams look for how fast you can come add value. My interview experience has varied massively at this point, but what I've noticed is the big difference between large companies like FAANG and smaller orgs, like DS in banking or random small companies. At FAANG it's kind of like an IQ plus knowledge test (what Google calls "role-related knowledge"), while smaller companies run assessments for very specific types of modeling or use cases, like building a model evaluated on a certain metric. So at FAANG I was asked questions like "why is the formula for s.d. different for population vs. sample" or "what happens to the bias/variance in x, y, z situation," meanwhile the companies that are smaller and pay less sent me a random 30-60 minute assessment and asked me to directly clean data and code up a model with sklearn/pandas. Is this what everyone else has experienced? It does seem like smaller or traditional companies test whether you will be a good code monkey, while others look for actual understanding. Submitted by /u/LeaguePrototype
- How do you keep up without burnout? DS sometimes feels like there's an infinite amount of things to learn. The most recent trend has been AI engineering. And it's not like AI came in so you can deprioritize something else; instead it just gets added to the heap. So you already had this massive amount of content to know from stats and product, traditional ML, deployment, ops, engineering, cloud, etc., and then you add the new thing on top, and then the next new thing. And when you read the job descriptions, they literally list all of this. I just had an interview with a random gaming company that wanted cloud, Snowflake, stats, ML, ops, and AI experience in one person, and it was for like 3-5 years of experience. And I wish this were a one-off thing, but it seems to be getting more common. It actually feels like FAANG is easier to interview for because they silo people and don't expect you to know and do everything. What is your strategy for learning these skills without getting exhausted, or do you feel companies' expectations are overinflated? Is this a byproduct of AI, where people are expected to do a lot more with less? Submitted by /u/LeaguePrototype
- Feels like DS hiring logic is starting to change because of AI. Been noticing new DS hiring products like Litmetrics.ai lately, which seem much more focused on real datasets and messy business cases than the classic coding-test format. A lot of DS work today is more likely to be end-to-end analytical judgment with AI in the loop. That feels like a different hiring target than the classic CodeSignal/HackerRank screening; pretty sure most data scientists have used those in interviews. Curious what other people think. Is DS hiring actually changing at the assessment layer, toward whether candidates can work through a real business problem, or is putting AI language on top of the classic coding test and screening process still the default? Submitted by /u/Alarming-Wish207
- Onsite interview anxiety: what to say when you don't know an answer? I have an onsite interview coming up, not virtual, and it's been a while since I've interviewed in person. The recruiter said the coding portion could cover anything from data structures and algorithms to SQL, pandas, or even live model building, so I'm expecting there will be things I don't know. What's really stressing me out is the idea of being in front of someone and blanking on a question. That feeling of just sitting there stuck feels embarrassing. In that situation, what's the best way to handle it? Is it better to say something like "Sorry, I can't figure this out right now" or "I haven't covered this topic before" and ask to move on? Submitted by /u/Fig_Towel_379