Overview: We are seeking a talented, self-motivated Data Engineer with expertise in SQL, data warehousing, data lakes, automation, and analytic tools.
The ideal candidate will be responsible for designing, implementing, maintaining, and optimizing scalable data pipelines that extract, transform, and load (ETL) data into our data warehouse, supporting international logistics operations.
This role requires solid fundamentals in application development, scripting, and general automation to ensure timely, accurate, and high-quality information delivery.
The individual must be able to identify data issues and correct them through data transformation and/or by identifying and ingesting new data sources.
The candidate will collaborate closely with business leads and end users as well as the Information Services (IS)/Information Technology (IT) team.

Key Responsibilities:

Data Engineering & ETL Pipeline Development:
- Build and optimize end-to-end pipelines using SQL Server, Azure Data Factory, Databricks, and other modern tools to ingest, process/transform, and load datasets from external solicitation and award sources, our Enterprise Resource Planning (ERP) system (Microsoft Dynamics 365 Business Central), and other systems into a data warehouse and data lakes.
- Automate data workflows and updates across the ERP and data warehouse using Power Automate and Python.
- Design and maintain efficient data pipelines for large-scale procurement data, pricing analysis, and forecasting reports using tools such as MS SQL Server, Power BI, Power Query, and Azure Data Factory.
- Manage the integration between Microsoft Dynamics 365 Business Central and the data warehouse to ensure accurate and efficient data flows and analytics.
- Implement data validation and quality checks to enforce data quality metrics such as accuracy, completeness, consistency, uniqueness, timeliness, and validity.

Data Modeling and Architecture:
- Assist in maintaining and extending the AMS Group Supply Chain Solutions (SCS) data warehouse, which contains 34 million parts, 100+ million award records, and 3.4 million suppliers.
- Assist in the design and maintenance of database structures to support analytical and operational use cases.
- Evaluate and implement data storage solutions, which may include relational databases, NoSQL databases, data lakes, and cloud storage.

Automation, Analytics, Programming, and Integration:
- Lead automation efforts with Python, Databricks, and Power Automate to improve data engineering efficiency.
- Develop and maintain Power Apps solutions that allow end users to update and manage data efficiently.
- Support analytics and business reporting needs by building interactive Power BI dashboards and forecasting reports.
- Build and maintain integrations with internal and external data sources and APIs.
- Design and implement RESTful APIs and web services for data access and consumption.

Cloud and Modern Data Solutions:
- Help design and build our new data technology stack in Microsoft Azure.
- Support migration efforts to modern cloud solutions on the Microsoft Azure technology stack for the data warehouse and data lakes.
- Implement and manage data lake and structured data warehouse architectures to meet evolving business needs.

Collaboration and Documentation:
- Partner with the CIO, IS Lead, and IS Project Manager to define project requirements and ensure successful delivery.
- Document all processes, technical designs, workflows, coding changes, requirements, and best practices to keep technical documentation up to date.
- Provide end-user technical guidance and support; troubleshoot issues and resolve data discrepancies promptly.
- Support ongoing business requests for ad hoc data sets needed to win clients and grow the business.
- Support ongoing business requests by helping to build self-service analytics in Power BI and/or Databricks.
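To illustrate the kind of data validation and quality checks described above, the sketch below computes a few of the named metrics (completeness, uniqueness, validity, timeliness) over a batch of records in Python, the posting's stated automation language. This is a hypothetical example: the field names (part_id, price, updated_at) and thresholds are illustrative assumptions, not the team's actual schema or implementation.

```python
# Illustrative data-quality check. Field names (part_id, price, updated_at)
# are hypothetical examples, not an actual warehouse schema.
from datetime import datetime, timezone, timedelta

def quality_report(records):
    """Return simple data-quality metrics for a batch of record dicts."""
    total = len(records)
    report = {}

    # Completeness: fraction of records with a non-empty part_id.
    complete = sum(1 for r in records if r.get("part_id"))
    report["completeness"] = complete / total if total else 0.0

    # Uniqueness: fraction of distinct part_ids among populated ones.
    ids = [r["part_id"] for r in records if r.get("part_id")]
    report["uniqueness"] = len(set(ids)) / len(ids) if ids else 0.0

    # Validity: price must be a non-negative number.
    valid = sum(1 for r in records
                if isinstance(r.get("price"), (int, float)) and r["price"] >= 0)
    report["validity"] = valid / total if total else 0.0

    # Timeliness: record updated within the last 7 days (assumed window).
    cutoff = datetime.now(timezone.utc) - timedelta(days=7)
    timely = sum(1 for r in records
                 if r.get("updated_at") and r["updated_at"] >= cutoff)
    report["timeliness"] = timely / total if total else 0.0
    return report

batch = [
    {"part_id": "A1", "price": 9.5, "updated_at": datetime.now(timezone.utc)},
    {"part_id": "A1", "price": -2, "updated_at": datetime.now(timezone.utc)},
    {"part_id": None, "price": 3.0, "updated_at": None},
]
print(quality_report(batch))
```

In practice, checks like these would typically run inside the pipeline (e.g., as a Databricks or Azure Data Factory validation step) and feed alerting rather than a simple print.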
Requirements
Essential Duties and Responsibilities:
- Manage and resolve data engineering tickets with minimal oversight, maintaining a 90% or better customer satisfaction rating.
- Support and troubleshoot daily data downloads to prevent data gaps or processing errors.
- Lead regression testing efforts, identify bugs, and validate data to maintain high data accuracy.
- Ensure data model normalization and hygiene to maintain a clean and efficient data environment.
- Extend and maintain SharePoint Online solutions to streamline internal data processes.
- Leverage our internal ITSM tool to effectively manage and track IS tickets.

Key Skills & Qualifications:
- Advanced proficiency in SQL Server and experience with data warehouse architecture.
- Bachelor's degree required; master's degree preferred.
- 3-5 years of relevant experience; experience in the defense supply chain is a major bonus.
- Hands-on experience with Databricks, Azure Data Factory, and data lakes.
- Strong automation skills using Python and Power Automate.
- Proficiency with Microsoft Power BI, Power Apps, and Excel for reporting and analysis.
- Proficiency with ETL tools commonly used in data engineering.
- Familiarity with cloud platforms and services.
- Experience with ERP systems such as Microsoft Dynamics 365 Business Central.
- Ability to manage multiple tasks effectively and deliver solutions with minimal oversight.
- Strong collaboration and communication skills to engage with stakeholders across departments and work effectively in a team-oriented environment.
- Ability to adapt to evolving technologies and business requirements.
- Excellent problem-solving skills and attention to detail.
Measures of Success:
- Maintain accurate and timely data processing with minimal errors or downtime.
- Achieve and maintain a 90%+ satisfaction rating on all support tickets.
- Deliver well-documented and optimized ETL pipelines to support the business.
- Contribute to the successful migration to Azure-based data management solutions.
- Maintain a detailed self-assessment of career and training goals to support continuous professional development.