DevOps Engineer at Reli Group

Posted 9 months ago · DevOps & System Administration · Mid-level · Full-Time

We are looking for a candidate with 7 years of experience in programming and infrastructure-as-code scripting.


Requirements

  • 5+ years of experience working in a DevOps engineering role
  • 7 years of experience with programming and infrastructure-as-code scripting
  • 5 years of experience with AWS and Databricks strongly preferred
  • Experience building a proper path to production leveraging multiple lifecycles, testing, integration, automation
  • Experience with software development practices like DevOps and CI/CD toolchains (e.g., Jenkins, Azure DevOps Services, GitHub)
  • Experience with container orchestration systems (e.g., Docker, Kubernetes, Cloud Foundry, Azure Kubernetes Service)
  • Experience running, deploying, and maintaining production cloud infrastructure in AWS (e.g., Terraform, CloudFormation, Lake Formation)
  • Experience with configuration management tools
  • Demonstrated experience with a variety of relational database and data warehousing technologies, such as Amazon Redshift and Databricks
  • Experience with developing solutions on cloud computing services and infrastructure in the data and analytics space
  • Prior experience with deploying SAS Viya in AWS Workspaces
  • Experience operating within a database reliability engineering (DRE) and/or systems reliability engineering (SRE) role
  • Bachelor's degree in Information Technology, Business Administration, or Health Information Management, or an approved equivalent combination of education and experience, required
  • Certification in one or more of the following technologies preferred: Cloud, Mobile, Database, Data Engineering, Data Analytics, Big Data, BI, Data Science, Machine Learning, or Artificial Intelligence
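
As a flavor of the CI/CD toolchain work named in the requirements, a minimal GitHub Actions workflow might look like the sketch below. This is an illustrative assumption, not this team's actual pipeline; the file path, job name, and test command are placeholders.

```yaml
# .github/workflows/ci.yml — hypothetical minimal build/test pipeline
name: ci
on: [push, pull_request]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
```

A real path to production would layer additional stages on top of this (integration tests, artifact publishing, gated deploys), as the requirements describe.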


Description

  • Communicate effectively while working closely with architects, system administrators, project management and administrative/operational staff to understand, define, prototype, publish and maintain database artifacts
  • Mix technical skills (Python, SQL, Apache Spark) with data pipeline design while configuring intuitive, efficient and automated data pipelines using the technology stack available at MIDAS/CMS
  • Contribute to our team's culture of continuous, dependable, sustainable analytics and software development. We practice lightweight data governance, version control, Confluence documentation, and automated testing, builds, and deployment: the works
  • Work with architects, database engineers, BI users, and Quality Assurance specialists when developing data pipelines with a purpose. Candidates should be willing to create proofs of concept to support their recommendations
  • Be detail/solution-oriented and pay close attention to workflows and outcomes, and collaborate with internal departments, external organizations, and other developers and engineers to refine our suite of analytics products
  • Troubleshoot data models, bugs, system configurations, and app/data integrity across many tools in a complex, technical environment
  • Work in sprints, applying agile software development principles to daily work (including JIRA and GitHub). Use CMS tools such as JIRA and Confluence to support the chosen framework and store generated artifacts
  • Provide leadership to develop and execute highly complex and large-scale data structures and pipelines to organize, collect, and standardize data to generate insights and address reporting needs
  • Build and deploy all database infrastructure and ensure it is available 24/7/365. Experience with AWS and Databricks strongly preferred
  • Develop automation and tooling to increase operational efficiency while ensuring system reliability and security
  • Manage multiple competing priorities in a fast-paced, deadline-oriented environment.
  • Maintain thorough and well-written documentation.
  • Participate in live event support, root cause analysis and troubleshooting and on-call rotation
  • May provide oversight and direction to junior team members
  • Effectively communicate ideas and data verbally and in writing
  • Provide input to highly complex decisions that impact future enhancements
  • Seek input from multiple constituents and stakeholders to drive innovative solutions
  • Incorporate feedback and ensure decisions are implemented swiftly to yield high quality execution
  • Effectively negotiate and resolve conflicts in a constructive manner
  • Build strong relationships and collaborate effectively with stakeholders
  • Serve as a role model, demonstrating respect and inclusion, creating a culture that fosters innovation
  • Support and mentor engineers, empowering them to make effective decisions
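
To give a flavor of the pipeline work described above, here is a minimal Python sketch of a data-standardization step of the kind such pipelines perform. The record fields, formats, and function name are illustrative assumptions, not part of the MIDAS/CMS stack.

```python
# Hypothetical example of a standardization step in a data pipeline:
# trim whitespace, normalize code fields, and convert dates to ISO format.
from datetime import datetime

def standardize_record(raw: dict) -> dict:
    """Normalize one raw record (field names are illustrative)."""
    return {
        "member_id": raw["member_id"].strip(),
        "state": raw["state"].strip().upper(),
        "service_date": datetime.strptime(
            raw["service_date"], "%m/%d/%Y"
        ).date().isoformat(),
    }

raw_records = [
    {"member_id": " A123 ", "state": "md", "service_date": "01/05/2024"},
]
clean = [standardize_record(r) for r in raw_records]
print(clean[0])  # {'member_id': 'A123', 'state': 'MD', 'service_date': '2024-01-05'}
```

In practice a step like this would run at scale in Apache Spark and be exercised by the automated tests the posting mentions.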


EEO Employer:

RELI Group is an Equal Employment Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, national origin, ancestry, citizenship status, military status, protected veteran status, religion, creed, physical or mental disability, medical condition, marital status, sex, sexual orientation, gender, gender identity or expression, age, genetic information, or any other basis protected by law, ordinance, or regulation.

HUBZone:

RELI Group is an established SBA certified HUBZone and 8(a) small business. We encourage all candidates who live in a HUBZone to apply. You can check to see if your address is located in a HUBZone by accessing the SBA HUBZone Map.

๐ŸŒ World Wide devops AWS python sql azure terraform
๐ŸŽ‰ Let Employers Find You!

Employers will see your profile when they are sending a job in your skill.


Create Your Profile   (simple)