
Position Intro

Linden Lab, a pioneer in the creation of virtual worlds, has been in business since 2003 and is best known for “Second Life”, the world’s largest user-generated economy.

Our mission is to build new economies by enabling our partners to compensate their content creators for the digital goods and services they produce. Here on the Ecom engineering team, we accomplish this by building a growing set of financial capabilities on top of our regulatory licenses. Some of these capabilities include processing payments and payouts, verifying user identities, detecting fraud, and enforcing sanctions. Additionally, these systems have an expanding set of tools around them to be used by our partners and customers.

The Data Engineering team supports everything from business analytics and internal tooling to partner and regulatory reporting, as well as real-time fraud mitigation. The Data Engineer will contribute by helping build out our real-time data pipeline, developing real-time and batch ETL processes, and modeling data. Come join a friendly, seasoned team and a great company as we change the world.

Primary Functions

The Data Engineer will be responsible for helping build out our real-time data pipelines, developing real-time and batch ETL processes, and modeling data.

Responsibilities

  • Real-Time Infrastructure – Maintain, improve, and troubleshoot real-time data processing systems on AWS and GCP, and evaluate new technologies.
  • Real-Time Reporting Database – Set up and maintain processes to replicate our operational data into our reporting database. Dependability and accuracy are crucial for this real-time reporting database.
  • Real-Time ETL – Develop and maintain dependable, performant ETL processes that provide real-time data for our support team and our clients. We use StreamSets pipelines and Apache Airflow (Java or Python) to process and transform real-time data for our reporting database and data warehouse, respectively.
  • Batch ETL – Build and maintain code to populate our Google BigQuery data warehouse with scheduled batch updates from logs and data generated by our backend systems (see the sketch after this list).
  • External Data – Set up processes to download data from external sources and incorporate that data into our reporting data models.
  • Data Modeling – Work closely with our Data Architect, Product Managers, and Analysts to design and model new tables that meet analytics and reporting needs in support of new features and requirements.
  • Operations Liaison – Work closely with our systems engineers and our vendors' support teams to quickly assess the impact of production system changes on existing reporting and data warehouse processes.
  • Other duties may be assigned.
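To make the Batch ETL bullet above concrete, here is a minimal sketch of a scheduled Airflow load from Cloud Storage into BigQuery. It assumes Airflow 2.x with the Google provider installed; the DAG id, bucket, dataset, and table names are hypothetical illustrations, not a description of Linden Lab's actual pipeline.

    # Hypothetical daily batch-ETL DAG: load backend log files from GCS
    # into a BigQuery warehouse table. All names are illustrative.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
        GCSToBigQueryOperator,
    )

    with DAG(
        dag_id="daily_backend_log_load",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",  # one scheduled batch update per day
        catchup=False,
    ) as dag:
        load_logs = GCSToBigQueryOperator(
            task_id="load_logs_to_warehouse",
            bucket="example-backend-logs",            # hypothetical bucket
            source_objects=["logs/{{ ds }}/*.json"],  # partitioned by run date
            destination_project_dataset_table="example_project.warehouse.backend_logs",
            source_format="NEWLINE_DELIMITED_JSON",
            write_disposition="WRITE_APPEND",         # append each day's batch
            autodetect=True,                          # infer schema from the JSON
        )

On the real-time side of the same list, a StreamSets pipeline plays the equivalent role for the reporting database, continuously applying changes rather than running on a schedule.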

Knowledge, Skills, Abilities

To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

  • Advanced SQL and relational database concepts.
  • Solid experience with either Python or Java.
  • Fluency in Linux; some system administration experience will be useful.
  • Desire to work in a collaborative, entrepreneurial environment on genuinely interesting problems.
  • Strong attention to detail and the ability to effectively QA your own work, so that reporting and warehouse data are complete and accurate (a reconciliation sketch follows this list).
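As one illustration of the QA point above, here is a minimal completeness check that reconciles daily row counts between a source replica table and its warehouse copy, using the google-cloud-bigquery client. The project, dataset, table, and column names are assumptions made for the sketch.

    # Hypothetical completeness check: flag any day whose row count differs
    # between the operational replica and the warehouse copy.
    from google.cloud import bigquery

    client = bigquery.Client()

    QUERY = """
    SELECT src.day, src.n AS source_rows, whs.n AS warehouse_rows
    FROM (
      SELECT DATE(created_at) AS day, COUNT(*) AS n
      FROM `example_project.replica.transactions`
      GROUP BY day
    ) AS src
    LEFT JOIN (
      SELECT DATE(created_at) AS day, COUNT(*) AS n
      FROM `example_project.warehouse.transactions`
      GROUP BY day
    ) AS whs USING (day)
    WHERE whs.n IS NULL OR src.n != whs.n
    ORDER BY src.day
    """

    for row in client.query(QUERY).result():
        print(f"{row.day}: source={row.source_rows}, warehouse={row.warehouse_rows}")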

Education

  • Bachelor's degree in a computer- or database-related field, or equivalent professional experience

Experience

  • 5+ years of experience in software engineering, database development, or database administration.
  • Experience working with real-time data processing for reporting purposes, including message queues such as Kinesis, Kafka, or Pub/Sub and the tools that consume that data (a consumer sketch follows this list). Prior experience with Google Cloud Dataflow and/or Spark is a big plus.
  • Experience with row-oriented relational databases such as MySQL and PostgreSQL.
  • Experience working with large-scale data warehouse platforms such as BigQuery, Redshift, Snowflake, Teradata, etc.
  • Experience with non-relational document stores such as Elasticsearch is a big plus.
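To ground the message-queue experience above, here is a minimal consumer sketch using kafka-python; Kinesis or Pub/Sub consumers would be structurally similar. The topic, broker address, and event fields are hypothetical.

    # Minimal real-time sketch: consume JSON events from Kafka and apply a
    # light transformation before loading into a reporting store.
    import json

    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "payments",                          # hypothetical topic
        bootstrap_servers="localhost:9092",  # hypothetical broker
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="latest",
    )

    for message in consumer:
        event = message.value
        record = {
            "payment_id": event["id"],
            "amount_usd": round(float(event["amount"]), 2),
            "occurred_at": event["timestamp"],
        }
        print(record)  # in practice: upsert into the reporting database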

Travel Requirements

No travel required

Physical Demands & Work Environment

The physical demands and work environment described here are representative of those an employee encounters while performing the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

While performing the duties of this job, the employee is regularly required to use a computer and communicate with coworkers in an office environment. The employee is frequently required to stand or sit to complete work and may occasionally lift and/or move up to 10 pounds.

Linden Lab seeks to maintain a diverse and welcoming workplace; therefore, candidates from all backgrounds are encouraged to apply.

Fine Print:

The statements herein are intended to describe the general nature and level of work being performed by employees in this job. They are not intended to be construed as an exhaustive list of all responsibilities, duties, and skills required of personnel so classified.