We are looking for a skilled Machine Learning Engineer with a solid foundation in data engineering to develop and implement advanced machine learning models that enhance Allio's products and services. The ideal candidate will have experience in building data pipelines, handling large datasets, and creating scalable machine learning solutions, with proficiency in tools like Snowflake. Experience working with financial and/or economic time-series data is highly preferred.
Key Responsibilities:
- Design and develop robust data pipelines and ETL processes to support machine learning models.
- Ensure data quality and integrity across various data sources.
- Work with data warehousing solutions, particularly Snowflake, to manage and optimize data storage.
- Collect, preprocess, and analyze financial and economic time-series data.
- Apply time-series modeling techniques to extract insights and forecast trends.
- Build and deploy machine learning models and algorithms tailored to business needs.
- Conduct experiments to test hypotheses and validate models.

Algorithm Optimization:
- Optimize machine learning algorithms for performance and scalability.
- Implement techniques to improve model accuracy and reduce computational costs.

Integration and Deployment:
- Collaborate with software engineers to integrate machine learning models into production systems.
- Ensure models are robust, scalable, and maintainable in real-world applications.

Research and Innovation:
- Stay current with the latest developments in machine learning and data engineering.
- Explore new technologies and methodologies to enhance existing solutions.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
- Proven experience as a Machine Learning Engineer or in a similar role.
- Strong programming skills in Python.
- Experience with machine learning frameworks such as TensorFlow, PyTorch, XGBoost, LightGBM, or scikit-learn.
- Proficiency in data engineering tools and techniques, including SQL, NoSQL databases, and ETL processes.
- Experience with data warehousing platforms, particularly Snowflake.
- Familiarity with big data technologies like Apache Spark or Hadoop.
- Strong understanding of statistical modeling, data mining, and data visualization techniques.
- Experience working with financial and/or economic time-series data.
- Experience with software development tools and version control systems (e.g., Git).
- Excellent problem-solving skills and attention to detail.

Preferred Qualifications:
- Experience with cloud platforms like AWS, Google Cloud, or Azure.
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
- Familiarity with data pipeline tools such as Apache Airflow or Luigi.
- Strong communication skills and the ability to work collaboratively in a team environment.