from Data Science
Are teams still using PyTorch/TensorFlow, or is most ML work just calling LLM endpoints and prompt engineering now?
Our take
The machine learning (ML) landscape is shifting noticeably toward large language models (LLMs) and prompt engineering. As job seekers explore the market, many roles now center on leveraging LLMs through hosted APIs such as those from OpenAI and Anthropic. Traditional frameworks like PyTorch and TensorFlow remain relevant, particularly for classical ML tasks, but the surge in LLM adoption reflects a broader move toward more accessible, pre-trained AI capabilities.
I've been looking for a new job lately (brutal market, btw), and a lot of the ML/AI engineering work now seems pretty LLM-dominated.
I still see a few jobs doing more "classical", pre-ChatGPT-era work with PyTorch or TensorFlow, but a lot of the work now seems to be building with LLMs: RAG, prompt engineering, and so on, using LangChain or similar tooling and calling Anthropic or OpenAI model endpoints.
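To make the contrast concrete, here is a minimal sketch of the RAG pattern the post describes, in plain Python with no real endpoint call. The `retrieve` and `build_prompt` helpers and the sample documents are hypothetical; in practice the assembled prompt would be sent to a hosted model via an SDK or HTTP client rather than to a locally trained network.

```python
# Hypothetical minimal RAG sketch: retrieve the most relevant document
# by word overlap, then assemble the prompt that would be sent to a
# hosted LLM endpoint (e.g. OpenAI or Anthropic) instead of running
# inference on a locally trained PyTorch/TensorFlow model.

def _words(text: str) -> set[str]:
    """Lowercase and strip trailing punctuation so tokens compare cleanly."""
    return {w.strip(".,?!") for w in text.lower().split()}

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query."""
    q = _words(query)
    return max(docs, key=lambda d: len(q & _words(d)))

def build_prompt(query: str, context: str) -> str:
    """Format the retrieved context plus the question into one prompt."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "PyTorch is a deep learning framework for training neural networks.",
    "Prompt engineering shapes the instructions sent to a hosted LLM.",
]
query = "What is prompt engineering?"
prompt = build_prompt(query, retrieve(query, docs))
# In a real pipeline, `prompt` would now go to a model endpoint;
# the "ML work" is the retrieval and prompt assembly around that call.
print(prompt)
```

The point of the sketch is that the engineering effort sits in retrieval and prompt construction, not in model training, which is exactly the shift the post is asking about.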
Is this an accurate take on the market? And if so, what happened to all the PyTorch/TensorFlow work? Why did it shift so heavily toward just using LLM providers through some package or endpoint?
Tagged with
#LLMs#PyTorch#TensorFlow#prompt engineering#ML#AI#RAG#endpoint#LangChain#ChatGPT#Anthropic#OpenAI#classical ML#AI engineering#job market