Applications are no longer being accepted for this position.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures.
- 5+ years of proven work experience as a Cloud Data Engineer
- Advanced SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
- Experience with relational SQL and NoSQL databases, including Aurora PostgreSQL and Amazon DynamoDB
- Experience with AWS services such as Lake Formation, Kinesis, Firehose, Data Pipeline, Lambda, Glue, Athena
- Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.
- Bachelor's degree (or equivalent) in computer science, information technology, or a related field
- AWS Certification