
Cloud Data Engineer

Posted on September 29, 2021

Location

Boston, MA
Job ID: 13678

Position Details

Full-Time

Job Summary

Beacon Technologies is seeking a Cloud Data Engineer. This role is based in Boston, MA.

Job Description

The Cloud Data Engineer is a specialized role that participates in designing and implementing systems on public cloud infrastructure to deliver more analytical and business value from a wide range of data sources. You will work with the team to design and develop high-performance, resilient, automated data pipelines, streams, and applications, adapting technologies for ingesting, transforming, classifying, cleansing, and exposing data using creative design to meet objectives. Your broad experience with data management technologies will enable you to match the right technologies to the required schemas and workloads. The company's focus is on the AWS and GCP platforms, with a strong serverless bias. They rely heavily on Python, PySpark, BigQuery, and related technologies, and work in an Agile, DevOps team culture. You are expected to bring an array of specialized skills noted below, and to lead by learning.
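As a rough illustration of the ingest-transform-expose work this paragraph describes, the following is a minimal PySpark sketch; the bucket paths and column names are hypothetical, and a production version would typically run inside an AWS Glue job with a GlueContext rather than as a standalone script:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-ingest-sketch").getOrCreate()

    # Ingest raw CSV landed in object storage by an upstream feed (hypothetical bucket).
    raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

    # Cleanse and reshape: normalize types and drop rows missing a key.
    cleaned = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("order_id").isNotNull())
    )

    # Expose as Snappy-compressed Parquet, partitioned for query engines such as Athena.
    (cleaned.write
            .mode("overwrite")
            .option("compression", "snappy")
            .partitionBy("order_date")
            .parquet("s3://example-curated-bucket/orders/"))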

Responsibilities:

  • Build and maintain serverless data pipelines at terabyte scale using AWS and GCP services – AWS Glue, PySpark and Python, AWS Redshift, AWS S3, AWS Lambda and Step Functions, AWS Athena, AWS DynamoDB, GCP BigQuery, GCP Cloud Composer, GCP Cloud Functions, Google Cloud Storage, and others (a BigQuery load sketch follows this list).
  • Integrate new data sources from enterprise sources and external vendors using a variety of ingestion patterns, including streams, SQL ingestion, files, and APIs.
  • Maintain and provide support for the existing data pipelines using the above-noted technologies.
  • Work to develop and enhance the data architecture of the new environment, including recommending optimal schemas, storage layers, and database engines (relational, graph, columnar, and document-based) according to requirements.
  • Develop real-time and near-real-time data ingestion from a range of data integration sources, including business systems, external vendors, and partner and enterprise sources.
  • Provision and use machine-learning-based data wrangling tools such as Trifacta to cleanse and reshape third-party data and make it suitable for use.
  • Participate in a DevOps culture by developing deployment code for applications and pipeline services.
  • Develop and implement data quality rules and logic across integrated data sources.
  • Serve as internal subject matter expert and coach to train team members in the use of distributed computing frameworks and big-data services and tools, including AWS and GCP services and projects.
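To give a similarly hedged flavor of the GCP side of these responsibilities, below is a minimal sketch of loading curated Parquet from Cloud Storage into BigQuery with the google-cloud-bigquery client; the project, dataset, and bucket names are hypothetical, and in practice a step like this would run inside a Cloud Function or a Cloud Composer task:

    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")  # hypothetical project

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )

    # Load curated Parquet files from Cloud Storage into an analytics table.
    load_job = client.load_table_from_uri(
        "gs://example-curated-bucket/orders/*.parquet",
        "example-project.analytics.orders",
        job_config=job_config,
    )
    load_job.result()  # Block until the load job completes; raises on failure.

    table = client.get_table("example-project.analytics.orders")
    print(f"Loaded table now has {table.num_rows} rows")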

Required Experience and Skills: (Experience is expected to be hands-on, not gained through team exposure alone)

  • Master’s degree in Computer Science, Mathematics, Engineering, or equivalent work experience.
  • Four years working with datasets containing very high volumes of records or objects.
  • Expert-level programming experience in Python and SQL.
  • Two years working with Spark or other distributed computing frameworks (may include: Hadoop, Cloudera).
  • Four years with relational databases (typical examples include: PostgreSQL, Microsoft SQL Server, MySQL, Oracle).
  • Two years with AWS services, including S3, Lambda, Redshift, and Athena.
  • One year working with Google Cloud Platform (GCP) services, which may include any combination of: BigQuery, Cloud Storage, Cloud Functions, Cloud Composer, Pub/Sub and others (this may be via POC or academic study, though professional experience is preferred).
  • Some knowledge of AWS services: DynamoDB, Step Functions.
  • Experience with contemporary data file formats such as Apache Parquet and Avro, preferably with compression codecs such as Snappy and bzip2.
  • Experience analyzing data for data quality and supporting the use of data in an enterprise setting.

Desired Experience and Skills:

  • Streaming technologies (e.g.: Amazon Kinesis, Kafka).
  • Graph Database experience (e.g.: Neo4j, Neptune).
  • Distributed SQL query engines (e.g.: Athena, Redshift Spectrum, Presto).
  • Experience with caching and search engines (e.g.: Elasticsearch, Redis).
  • ML experience, especially with Amazon SageMaker, DataRobot, AutoML.
  • IaC tools, including CDK, Terraform, CloudFormation, Cloud Build.

About Beacon Technologies