Data Engineer at Transporeon GmbH
You are ready if you ...
- want to join us on our mission to digitize the logistics world, using your expertise, skills and passion for data to help customers make smarter, faster data-driven decisions that reduce CO2 emissions and today's major market inefficiencies
- are a quick learner and strong problem solver with attention to detail, capable of taking on loosely defined problems as well as breaking down and simplifying complex technical concepts
- have gathered industry experience in software or data engineering with Python, Java or Scala, and have at least basic knowledge of Linux and container virtualization (e.g. Docker)
- enjoy automation and have already applied DevOps concepts in practice
- have a track record of implementing ETL/ELT workflows in the cloud using SQL, Apache Spark (ideally AWS Glue) or the wider Hadoop ecosystem
- have strong skills in designing and evolving storage schemas as well as in efficiently querying the stored content, preferably in the context of a larger data lake or data warehouse (e.g. Redshift)
- have a basic understanding of the AWS ecosystem, including distributed computing and stream processing
At Transporeon we embrace transformation and change in total sync with one another. We rethink, reinvent and rework ideas from one moment to the next – as many times as is necessary to get the job done right. That’s how we respond to the new challenges that we face each and every day. And regardless of whether you are just starting your career or are already a pro – we believe you can be the transformation. Are you ready?
Our Data Team is ready and waiting with ...
- a unique chance to leave a footprint for years to come in our data landscape: be part of design, development and delivery of our next generation data platform
- various tasks in building and maintaining customized, scalable end-to-end data pipelines that empower multi-faceted analytics and data products
- challenging tasks around ensuring machine learning-grade data quality through continuous improvements in data cleaning and enrichment
- constantly evolving internal tooling, for which you can bring in your own ideas
- an opportunity to advance the roll-out of analytical self-services and machine learning models across the wider organization
- collaboration on various projects with Data Scientists, other engineering teams, and internal key stakeholders and data consumers
- the stability of a permanent contract in a successful company, combined with the chance to make a real impact as both we and our customers become more data-driven every day.