Job Description
A leading fintech company in Washington is seeking a Senior Data Engineer to lead the design and development of the large-scale data processing systems that power its financial platforms. You will work in a hybrid model, spending part of your time collaborating with cross-functional teams on-site while enjoying remote flexibility. This role is critical to ensuring data is stored, processed, and delivered efficiently and securely.
Required Skills & Experience
- 6+ years of experience in data engineering, with a focus on building large-scale data pipelines.
- Expertise in Apache Spark, Kafka, and Airflow for managing big data workflows.
- Strong experience in SQL, Python, and Scala for data manipulation and transformation.
- In-depth knowledge of data lakes and cloud storage solutions, such as AWS S3 or Azure Data Lake.
- Familiarity with data warehousing tools like Amazon Redshift, Google BigQuery, or Snowflake.
Daily Responsibilities
- Design, develop, and maintain robust ETL pipelines using Spark and Kafka to process massive data volumes.
- Collaborate with data scientists, machine learning engineers, and product teams to ensure the right data is available for analysis and reporting.
- Implement best practices for data management, security, and governance in compliance with industry standards.
- Monitor and optimize performance across data pipelines, ensuring scalability and efficiency.
The Offer
- Full-time hybrid role with a salary range of $150k–$170k per year, depending on experience.
- Benefits include health insurance, retirement plans with matching contributions, paid time off, and performance bonuses.
- The company offers opportunities for professional growth, including leadership development programs and certifications.
Note: Candidates must be authorized to work in the US without sponsorship.