
Careers at Slalom

Data Engineer
JO-1708-1676
San Francisco

Data Engineer Consultant

As a Data Engineer for Slalom Consulting, you'll work in small teams to deliver innovative solutions on Amazon Web Services, Azure and Google Cloud using core data warehousing tools, Hadoop, Spark, event-streaming platforms, and other big-data technologies. In addition to building the next generation of data platforms, you'll be working with some of the most forward-thinking organizations in data and analytics.

Who are you?

  • You’re a smart, collaborative person who is passionate about technology and driven to get things done.
  • You’re not afraid to bring your authentic self to work.
  • You embrace a continuous-learning mindset.

What technologies will you be using?

Everything. It’s about using the right technology to solve the problem at hand and playing with new technologies to figure out how to apply them intelligently. We work with tools across the board.

Why do we work here?

Each of us came to Slalom because we wanted something different. We wanted to make a difference, and we wanted the autonomy to own and drive our future while working with some of the best companies in San Francisco and the coolest technologies. At Slalom, we found our people.

What does our recruitment process look like?

Our process is highly personalized. Some candidates complete it in one week; for others it can take several weeks or even months. Deciding to take a new job is a big decision, so regardless of how long or short the process may be for you, the most important thing is that you find your dream job.

Qualifications:

  • Bachelor’s degree in Computer Engineering, Computer Science or related discipline
  • 5-7+ years of relevant experience
  • Understanding of different storage types (filesystem, relational, MPP, NoSQL) and experience working with various kinds of data (structured, unstructured, metrics, logs, etc.)
  • 4+ years of experience working with SQL
  • Experience with setting up and operating data pipelines using Python or SQL
  • 2+ years of experience working on AWS, GCP or Azure
  • Experience working with relational databases
  • Strong analytical problem-solving ability
  • Great presentation skills
  • Great written and verbal communication skills
  • Self-starter with the ability to work independently or as part of a project team
  • Ability to conduct performance analysis, troubleshooting and remediation
  • Experience working with data warehouses such as Redshift, BigQuery and Snowflake
  • Exposure to open source and cloud specific data pipeline tools such as Airflow, Glue and Dataflow
