The role will help support initiatives to promote Smart Digital Manufacturing. The primary responsibility of the candidate will be to support the organization’s broader Reliability Engineering initiatives through the application of machine learning and data-driven methods.
The work will involve analyzing data from facility systems (such as cooling systems and their components, including compressors and pumps) as well as manufacturing systems to generate actionable insights into equipment health and performance. By deploying data-driven and machine learning models in production environments, the candidate will help enable predictive maintenance and enhance the reliability and efficiency of production systems.
As the program evolves, the candidate may also contribute to advanced industrial AI applications, including Computer Vision solutions for Assisted Line Clearance and data-driven models supporting Digital Twin systems for monitoring, simulation, and optimization of manufacturing processes.
Key Responsibilities
- Support the machine learning model lifecycle, including data preparation, model development, validation, deployment, and continuous improvement.
- Collect, preprocess, and analyze data from sources such as compressors, pumps, and manufacturing systems.
- Understand underlying system and machine behavior to effectively interpret datasets such as vibration and frequency signals, force and position data, temperature readings, and images.
- Research and propose suitable machine learning and deep learning models relevant to the defined use cases.
- Develop, train, and validate models for condition-based monitoring, predictive maintenance, and anomaly detection.
- Develop production-level Python pipelines for training, validating, and deploying machine learning models in production manufacturing environments.
- Design and implement dashboards for monitoring data and model outputs using Grafana.
- Perform systematic model evaluation, validation, and monitoring to ensure robustness, generalizability, and long-term reliability.
- Monitor deployed machine learning models to ensure stable performance and detect data or model drift over time.
- Collaborate with technical leads, automation engineers, external vendors, and global data science teams to align solutions with production and operational needs.
Core Requirements
1. Education:
a. MSc in Computer Science, Machine Learning, Mechatronics, or related engineering fields, with at least 3 years of relevant experience, or
b. BSc in Computer Science, Machine Learning, Mechatronics, or related engineering fields, with at least 5 years of relevant experience.
2. Programming & Platforms:
a. Proficiency in Python (pandas, scikit-learn, PyTorch/TensorFlow, etc.).
b. Platforms: Databricks.
3. Machine/Deep Learning: Knowledge of anomaly detection methods, probabilistic models, and practical model deployment.
4. Systems Knowledge: Ability to interpret physical machine behavior through sensor data (e.g., pumps, compressors, assembly systems).
5. Software Engineering & DevOps:
a. Strong experience in writing clean, production-level code.
b. Proficiency with Git for version control and collaborative development.
c. Familiarity with DevOps practices for deployment, monitoring, and scalability, including Docker and CI/CD workflows.
6. Ownership & Execution: Ability to independently design and implement end-to-end machine learning solutions.
Desired Skills (Nice to Have)
- Experience in predictive maintenance applications or similar industrial data use cases.
- Experience in deploying machine learning models in a regulated environment like GMP.
- Experience with Grafana for building dashboards and visualizing time-series data.
- Knowledge of computer vision techniques, including CNNs, Vision Transformers, and vision anomaly detection models such as PatchCore and PaDiM.
- Familiarity with AWS services such as SageMaker and S3.
Interested? Send your CV to Daria at d.finikova@panda-int.com or call +31202044502.