Data Engineer and Integrations Lead
Tan Thuan Ward, Hồ Chí Minh, Vietnam
Job Description
MiTek® is a global provider of building solutions for the residential and commercial construction industries. Utilizing software, services, engineered products, and automated manufacturing equipment, MiTek partners with clients to accelerate their genius and deliver breakthroughs in building.
With a rich 60-year history and a network of 6,500 team members worldwide, MiTek pairs local expertise with global capabilities. As a Berkshire Hathaway (NYSE: BRK-A, NYSE: BRK-B) company since 2001, MiTek has a record of continuous growth and innovation.
Learn more at www.mii.com.
Key Responsibilities
Data Engineering & Modeling
- Design, implement, and own dimensional data models to support analytics, reporting, and downstream use cases.
- Review and validate data models to ensure consistency, scalability, and alignment with business requirements.
- Define and document data modeling standards, best practices, and guidelines for the organization.
- Continuously assess existing data models and propose improvements as business needs evolve.
Data Integration & Pipelines
- Lead the design, development, and maintenance of data pipelines, primarily using Azure Data Factory (ADF) and related Azure data services.
- Build robust ingestion processes from a variety of sources, including databases, files, and third‑party REST APIs.
- Ensure pipelines are reliable, efficient, scalable, and cost‑effective.
- Work closely with other engineers to refactor and optimize existing pipelines and SQL transformations.
CI/CD, Automation & DevOps
- Implement and champion CI/CD practices for data pipelines, SQL scripts, and analytics assets.
- Use Git‑based version control to ensure reproducibility, auditability, and quality.
- Partner with DevOps and IT teams to improve deployment workflows and automation across environments.
Data Operations, Monitoring & Reliability
- Monitor end‑to‑end data flows, from raw ingestion through curated warehouse layers.
- Define data quality checks and ensure facts and dimensions are populated accurately and on time.
- Implement logging, alerting, and monitoring to proactively detect and resolve failures.
- Support root‑cause analysis and implement long‑term fixes for data‑related issues.
Requirements
Experience & Background
- Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent practical experience).
- 4+ years of experience in data engineering, analytics engineering, or data warehousing roles.
- Demonstrated ownership of production data pipelines and data models in cloud environments.
Technical Skills
- Strong proficiency in designing and organizing data in Data Warehouse and/or Data Lake systems.
- Proven experience designing and operating data processing pipelines, supporting both batch and streaming workloads.
- Broad experience working with multiple database technologies, including RDBMS, NoSQL, and graph databases (e.g., Azure SQL, Oracle, Neo4j, MongoDB, Cassandra, HBase).
- Strong programming foundation with proficiency in SQL and at least one general‑purpose language such as Python or Java, along with solid understanding of data structures and algorithms.
- Hands‑on experience with Azure data services, such as:
  - Azure Data Factory (ADF)
  - Azure SQL / Synapse Analytics
  - Databricks (or similar transformation frameworks)
- Familiarity with CI/CD pipelines, Git, and infrastructure or analytics deployment workflows.
Professional & Soft Skills
- Strong analytical and problem‑solving mindset, with the ability to think structurally from problem to solution.
- Clear and effective communication skills, able to explain technical concepts to both technical and non‑technical audiences.
- Proven ability to work independently while collaborating across teams and time zones.
- Curiosity and willingness to continuously learn new tools, techniques, and best practices.