I am a cloud engineer with 2 years' experience designing and implementing complex cloud solutions on AWS and Azure, and over 5 years' experience working in the world of big data.
Maya Iuga
+40752262829
maya_iuga@yahoo.ro
AiCore • Mar 2023 - Present
- Managed the development and deployment of AWS and Azure solutions that met the strategic needs of a government-funded contract serving 250 clients and generating ~£2M in revenue.
- Designed and facilitated the company's data analytics platform migration to AWS, consolidating data from diverse sources into a unified destination and reducing querying time by up to 72%.
- Created custom Python scripts and GitHub Actions to automate daily updates from data sources, reducing manual work and increasing efficiency by 200%.
- Led the development of an end-to-end DevOps pipeline used by ~125 clients, utilizing Docker and Kubernetes for streamlined application deployment.
- Automated new application version deployments through Azure DevOps and Terraform, reducing deployment time by 52%.
- Migrated an SQL Server database to Azure, implementing disaster recovery solutions that increased availability from 99.95% to >99.99%. Improved monitoring processes and regular maintenance checks.
- Skills: AWS · Azure · Python · SQL · Azure DevOps · Terraform · Linux · Kubernetes · Docker · Git
AiCore • Sept 2022 - Mar 2023
- Trained 180 clients and provided technical support in cloud & data engineering technologies, decreasing client training time by 52%.
- Managed a team of 8 engineers with an annual turnover rate of 12.5%. Received promotion ahead of schedule.
- Spearheaded the migration of a data processing pipeline to AWS, resulting in a 95% increase in positive outcomes for clients on this project.
- Automated the deployment of over 120 AWS & Databricks client accounts using CloudFormation.
- Increased security for shared AWS resources by implementing strict IAM policies.
- Skills: AWS · Python · SQL · Databricks · Spark · NoSQL · CloudFormation · Airflow · Kafka · Linux
EPFL • Nov 2020 - Mar 2022
- Developed and implemented a data analysis pipeline using Python, serving the analytical requirements of a team of 9 scientists.
- Analyzed big data, including over 600GB of time series data and 80GB of video data, to derive insights in neural computation research.
- Skills: Python · Git · Linux · Time Series Data · Data Analysis · GLMs · Deep Learning
- Designed and led the migration of the company's data analytics platform to AWS, reducing querying time by up to 70%.
- Consolidated data from diverse sources (internal portal, Google Sheets, HubSpot) into a unified destination on AWS.
- Used custom-written Python scripts and CI/CD pipelines built with GitHub Actions to perform daily data uploads to both DynamoDB and Amazon RDS.
- Performed real-time updates from the client portal, leveraging an EventBridge–Segment integration to update DynamoDB tables.
- Performed ETL transformations on data from all sources using AWS Glue. Stored transformed & compressed data in S3.
- Used Athena to query data and Amazon QuickSight to build visualisations.
- Integrated a Python Flask web application into a comprehensive DevOps pipeline.
- Containerized the web application using Docker.
- Defined Kubernetes deployment manifests for the application and implemented a rolling update strategy for application updates.
- Leveraged Terraform to provision Azure resources and deployed the Kubernetes manifests on an AKS cluster.
- Used Azure DevOps to automate Docker image builds and the deployment of new application versions on the AKS cluster.
- Monitored application health using Azure Monitor.
- Developed an end-to-end AWS-hosted data engineering pipeline. Introduced a Lambda architecture, supporting both batch and stream processing.
- Created a REST API using API Gateway, and used MSK and MSK Connect to distribute data from the REST API to an S3 Data Lake.
- Extracted batch data from S3 and cleaned and transformed it in Databricks using Spark queries.
- Used Airflow on MWAA to orchestrate Databricks workloads.
- Streamed data in real time using AWS Kinesis and performed near-real-time analysis using Spark on Databricks.
- Migrated a production environment database to Azure SQL Database using Azure Data Studio.
- Generated backups of the production database and restored them as a development database for safe testing and experimentation.
- Set up disaster recovery solutions for the production database and conducted tests of failover and failback procedures.
- Integrated Microsoft Entra ID authentication with the production Azure SQL Database for identity management.
Verify Here • Issued by Amazon Web Services
This certification showcases my proficiency in designing scalable and cost-effective solutions, implementing resilient architecture, and leveraging best practices to meet business requirements. It demonstrates my ability to architect solutions that align with industry standards, optimize performance, and ensure security in diverse cloud environments.
Verify Here • Issued by Databricks
This credential attests to my expertise in leveraging Databricks for effective data engineering solutions. It validates my ability to design and implement data engineering workflows, optimize data pipelines, and enhance overall data quality.
Verify Here • Issued by Microsoft
This certification validates my foundational knowledge of Azure cloud services and fundamental cloud concepts. It demonstrates my understanding of core Azure services, pricing, and the basics of cloud security and compliance.
EPFL, Lausanne, Switzerland
UCL, London, United Kingdom
First-Class Honours