
By LuisWert

Are you struggling to hire quickly enough?

Enterprise data science teams cannot hire data scientists and machine-learning engineers fast enough. But you don't have to wait for a fully staffed team before you can start making AI work. By pairing automation with better tooling, even small data science teams can make a significant impact.


One of the main themes I hear from chief data and analytics officers (CDAOs) is the difficulty of hiring and retaining data scientists and machine-learning engineers. Major data science projects stall when a chief data scientist leaves, or when the organization cannot hire enough ML engineers quickly enough to manage each model in production.

This is a common problem. A 2021 Gartner survey found that 64 percent of IT executives, nearly two-thirds, cited a lack of skilled talent as their biggest obstacle to adopting new technologies such as AI and machine learning. Data scientist positions are roughly 20 percent harder to fill than other IT roles and take twice as long to fill as the average US corporate job. Demand for ML engineers is higher than ever, with job openings growing roughly 30 times faster than those for IT services.

Large corporations have invested heavily in AI, expanding data teams on the promise of increased automation, personalized customer experiences at scale, and more accurate forecasts to increase revenue. Yet despite that potential, only 10% of AI investments have yielded significant ROI.

This raises a crucial question for CDAOs: with only a few data scientists, and perhaps even fewer ML engineers, how can they deliver value from AI/ML to the enterprise in the short term? In other words, can small data science teams drive outsized value without waiting months or even years for a fully staffed, well-trained team?


Teams should not wait for these roles to be filled. They need to find a way for MLOps to support more ML models without growing the data science staff. How do they do this? Here are some tips:


Recognize strengths in existing team members

Each member brings different skills and strengths to the team. Data scientists excel at turning data into models that solve business problems and inform business decisions. However, the expertise required to create great models doesn't translate into the skills needed to put those models into the real world as production-ready code, then monitor and update them on an ongoing basis. ML engineers combine tools and frameworks to ensure that data, pipelines, and other infrastructure work together to produce ML models at scale.

Data scientists might be willing to hand their models to MLOps for a production rollout. However, this may not be a cost-efficient process. Data scientists and MLOps engineers are not the same people and may have different ways of working. This can lead to time-consuming bottlenecks when one group attempts to explain a requirement (e.g., that specific data preprocessing is required) and the other team attempts to fulfill it.

How do ML engineers alert data scientists when a model misbehaves or becomes less accurate in production? The two groups then have to work together to identify the problem: is it an error in the production environment or a problem with the model itself? This can cause the same communication and coordination issues encountered during deployment. Data scientists also struggle to gain visibility into how their models behave within the production stack.
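To make the kind of accuracy check being described concrete, here is a minimal Python sketch. The function names, the PredictionRecord shape, and the five-percentage-point alert threshold are illustrative assumptions, not part of any particular MLOps product.

```python
# Minimal sketch of a production accuracy check (illustrative only).
from dataclasses import dataclass


@dataclass
class PredictionRecord:
    predicted: int  # what the model predicted
    actual: int     # the outcome observed later


def accuracy(records: list[PredictionRecord]) -> float:
    """Share of recent predictions that matched the observed outcome."""
    if not records:
        return 0.0
    hits = sum(1 for r in records if r.predicted == r.actual)
    return hits / len(records)


def check_model_health(records: list[PredictionRecord],
                       baseline_accuracy: float,
                       max_drop: float = 0.05) -> bool:
    """Return True if the model looks healthy, False if it needs review.

    A drop of more than `max_drop` below the accuracy measured at
    deployment time is treated as a signal to alert the data science team.
    """
    current = accuracy(records)
    if current < baseline_accuracy - max_drop:
        print(f"ALERT: accuracy fell from {baseline_accuracy:.2%} to {current:.2%}")
        return False
    return True
```

In practice a check like this would run on a schedule against freshly labeled outcomes, and the alert would go to whatever channel the data science team actually monitors.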



Avoid repeating the mistakes of cloud adoption

Ten years ago, IT infrastructure teams tried to build their own private clouds. Those clouds took longer than expected to deliver, cost more to build, and required more resources to maintain, while offering weaker security and scaling capabilities than public clouds. These enterprises spent significant time and resources on infrastructure instead of investing in core business capabilities.


Many companies are now following the same DIY approach to MLOps. Common methods for putting ML into production include custom solutions assembled from open-source tools such as Apache Spark.

These solutions are often inefficient in the time and effort each inference requires, and they lack the ability to monitor and test the accuracy of models over time. They are not scalable or repeatable enough to apply to many use cases across the enterprise (a representative hand-rolled pipeline is sketched below).
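As an illustration of what such a hand-rolled approach tends to look like, the sketch below wires up a pyspark.ml training pipeline by hand; the storage paths, column names, and churn use case are hypothetical. Note that nothing in it monitors, retests, or retrains the model once it is saved, which is exactly the gap described above.

```python
# A hypothetical DIY training pipeline built directly on Apache Spark.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("diy-mlops-pipeline").getOrCreate()

# Hypothetical training data: a parquet file with feature columns and a label.
df = spark.read.parquet("s3://example-bucket/training-data.parquet")

# Hand-wired feature assembly and model definition.
assembler = VectorAssembler(inputCols=["feature_a", "feature_b"],
                            outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
pipeline = Pipeline(stages=[assembler, lr])

# Train and persist the model; deployment, monitoring, and retraining
# would all have to be built separately.
model = pipeline.fit(df)
model.write().overwrite().save("s3://example-bucket/models/churn-v1")

spark.stop()
```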
