Senior Data Engineer (Analytics Engineering)
$8,000 - $11,000 / month
Secretlab is an international gaming chair brand seating over a million users worldwide, with our key markets in the United States, Europe and Singapore, where we are headquartered.
You will be a Data or Senior Data Engineer on our team, responsible for building out Secretlab’s data infrastructure and analytics. The demand for information has grown rapidly here at Secretlab, and the need for clean, serviceable stream and batch data has outstripped what out-of-the-box solutions can handle. We’re looking for Data Engineers who are excited about taking a start-up data culture to the next level.
To be successful, you will:
- Design data models and architecture for the data warehouse and other data systems
- Develop star-schema, analytics, and ML layers with Airflow, dbt (data build tool), etc. (see the sketch after this list)
- Develop standard template packages for the rest of the team (e.g., logging templates, AWS helpers)
- Maintain reliable data pipelines by following best practices (unit testing, logging, etc.) to avoid accruing technical debt
- Ship MVPs and balance UI/UX, function, and reliability, avoiding both over- and under-engineering
- Deliver build-to-order pipelines against feature requests and user stories
- Be comfortable with SaaS and cloud tools such as Fivetran, dbt, S3, and Snowflake
- Communicate clearly and concisely about all of the above, and guide junior team members
- Keep PRs bite-sized and easy to review, with 50% of PRs clearing within one review and 90% within two
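For a sense of the day-to-day work, here is a minimal sketch of an Airflow-orchestrated dbt build of the kind described above. The project path, selectors, and schedule are hypothetical illustrations, not Secretlab’s actual setup:

```python
# Minimal sketch: an Airflow DAG that builds staging models, then the
# star-schema marts that depend on them, then runs dbt tests.
# Paths, task ids, and selectors are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="warehouse_star_schema",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # rebuild the marts once a day
    catchup=False,
) as dag:
    dbt_staging = BashOperator(
        task_id="dbt_staging",
        bash_command="dbt run --project-dir /opt/dbt --select staging",
    )
    dbt_marts = BashOperator(
        task_id="dbt_marts",
        bash_command="dbt run --project-dir /opt/dbt --select marts",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt",
    )
    # Staging must finish before the marts build; tests gate publication.
    dbt_staging >> dbt_marts >> dbt_test
```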
What your week will look like
- Agile sprints with the Business Intelligence team to prune and prioritize the backlog (Ops)
- Developing data models (star schemas, event-based data marts, etc.)
- Exploratory data analysis based on business requirements
- Running comprehensive tests to ensure data quality (see the sketch below)
- Reviewing code as part of the Data Team’s production process
- Establishing connectors to downstream BI / DWH tools
- Handling data processing errors and failures as they surface
- Contributing process improvements and tool selections in the weekly retro (start / stop / continue)
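As an illustration of the comprehensive testing mentioned above, here is a minimal sketch of a pre-publish data-quality check. The mart and column names are hypothetical:

```python
# Minimal sketch: data-quality checks run on a mart before it is published.
# Table and column names are hypothetical.
import pandas as pd

def check_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality failures for the orders mart."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("order_id is not unique")
    if df["order_total"].lt(0).any():
        failures.append("order_total contains negative values")
    if df["customer_id"].isna().any():
        failures.append("customer_id contains nulls")
    return failures

if __name__ == "__main__":
    sample = pd.DataFrame({
        "order_id": [1, 2, 2],
        "order_total": [100.0, -5.0, 30.0],
        "customer_id": [10, None, 12],
    })
    for failure in check_orders(sample):
        print("FAIL:", failure)
```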
Requirements
Technical
- Proficient in SQL and Python
- Familiarity with Git
- Experience building human-fault-tolerant pipelines, scaling them up, setting up continuous integration, administering databases, keeping data clean, and ensuring deterministic pipelines (see the sketch below)
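To illustrate what “human-fault-tolerant” and “deterministic” mean here, a minimal sketch of an idempotent load step with logging and retries. Function, parameter, and variable names are hypothetical:

```python
# Minimal sketch: an idempotent load step that logs, retries transient
# failures with backoff, and produces the same output for the same input.
# Names are hypothetical.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def load_batch(rows: list[dict], target: dict, retries: int = 3) -> None:
    """Upsert rows keyed by 'id' so re-running a failed batch is safe."""
    for attempt in range(1, retries + 1):
        try:
            for row in sorted(rows, key=lambda r: r["id"]):  # deterministic order
                target[row["id"]] = row  # upsert: re-runs overwrite, never duplicate
            log.info("loaded %d rows", len(rows))
            return
        except Exception:
            log.exception("load failed (attempt %d/%d)", attempt, retries)
            time.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError("batch failed after all retries")
```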
Personality
- Real passion for data, new data technologies, and discovering new and interesting solutions to the company’s data needs
- Upfront and candid – someone who is eager to contribute to the continuous improvement of both team and process; open to giving and accepting feedback (especially in retros)
- Honest and pragmatic – someone who assesses their capabilities honestly, without embellishment
Bonuses
- Prior experience scaling up start-ups
- Experience with dbt development is a strong plus
- Experience with Apache Spark