
Data Engineer

Aderant

Atlanta, GA, US

Hybrid



Job Description

Aderant is a global, industry-leading software company providing comprehensive business management solutions for law firms and other professional services organizations, with a mission to help them run a better business. We are motivated by a collective desire to drive the legal industry to the forefront of innovation. With over 2,500 clients around the world, including 95 of the top AmLaw 100 firms, we are changing the outside perception of the legal sphere: where there was once resistance to modernization, we are creating a culture that embraces new ideas and technology.

At Aderant, the “A” is more than just a letter. It is a representation of how we fulfill our foundational purpose, serving our clients. It embodies our core values and reminds us that to achieve success, every day must start with the “A”. We bring the “A” to life by fostering a culture of innovation, collaboration, and personal growth. We encourage our diverse teams to bring their whole selves to work – ideas, experience, and passion – to drive our mission forward.

Our people are our strength.

About the Role

We are seeking a Data Engineer to contribute to the development and optimization of our cloud-native data platform. You will be responsible for implementing scalable ETL pipelines, supporting data infrastructure initiatives, and working with modern data lakehouse architectures. This is a hands-on role requiring strong technical skills in AWS data services, distributed computing, and data engineering best practices.

Responsibilities

Data Pipeline Development

  • Build and maintain production-grade ETL pipelines using AWS Glue, PySpark, and Apache Iceberg
  • Implement data transformations within our medallion-based data lakehouse (Bronze/Silver/Gold tiers); a simplified sketch of this pattern follows the list
  • Develop data models following dimensional modeling patterns (Fact/Dimension tables)
  • Write efficient, maintainable Python and SQL code for data processing
  • Support data quality checks and validation processes
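
To give a concrete flavor of the medallion pattern referenced above, here is a minimal sketch of a Bronze-to-Silver promotion using PySpark with Apache Iceberg. The catalog name, table names, and schema are hypothetical illustrations, not Aderant's actual environment.

```python
# Minimal sketch of a Bronze -> Silver promotion in PySpark with Apache Iceberg.
# Table names, schema, and catalog configuration are hypothetical; a real Glue
# job would take these from job arguments and the Glue Data Catalog.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("bronze-to-silver-orders")
    # Assumes an Iceberg catalog named "lakehouse" is already configured for
    # the session (e.g., via Glue job parameters).
    .getOrCreate()
)

# Read the raw (Bronze) ingest table.
bronze = spark.table("lakehouse.bronze.orders_raw")

# Standardize types, filter nulls, and deduplicate on the business key:
# typical Silver-tier cleanup.
silver = (
    bronze.withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
)

# Overwrite only the partitions touched by this batch (Iceberg dynamic overwrite).
silver.writeTo("lakehouse.silver.orders").overwritePartitions()
```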

Hands-On Engineering

  • Develop reusable Python modules and utilities for data processing tasks
  • Implement event-driven data workflows using Step Functions, Lambda, and SQS (see the sketch after this list)
  • Optimize Spark jobs for performance and cost efficiency under guidance from senior engineers
  • Work with data serialization formats (Parquet, Avro, JSON) for efficient storage and processing
  • Participate in code reviews and incorporate feedback to improve code quality
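
A simplified sketch of the event-driven pattern mentioned above: a Lambda handler consuming S3 event notifications from an SQS queue and starting a Step Functions execution per object. The state machine ARN, environment variable, and message shape are illustrative assumptions.

```python
# Sketch of an event-driven trigger: a Lambda consuming SQS messages (carrying
# S3 event notifications) and starting a Step Functions execution per object.
import json
import os

import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ["STATE_MACHINE_ARN"]  # hypothetical env var


def handler(event, context):
    """Entry point for an SQS-triggered Lambda."""
    for record in event["Records"]:
        body = json.loads(record["body"])
        # Standard S3 event notification structure.
        for s3_event in body.get("Records", []):
            bucket = s3_event["s3"]["bucket"]["name"]
            key = s3_event["s3"]["object"]["key"]
            sfn.start_execution(
                stateMachineArn=STATE_MACHINE_ARN,
                input=json.dumps({"bucket": bucket, "key": key}),
            )
    # Returning normally lets Lambda delete the processed batch from the queue;
    # partial-batch failure handling is omitted for brevity.
    return {"statusCode": 200}
```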

Collaboration & Growth

  • Work closely with senior engineers to implement architectural designs and technical solutions
  • Partner with Data Scientists and Analytics Engineers to understand and fulfill data requirements
  • Collaborate with Platform and DevOps teams on deployment and monitoring
  • Contribute to documentation and knowledge sharing within the team
  • Continuously learn and adopt best practices in data engineering

Required Qualifications

Experience

  • 1-3 years of experience in data engineering or related roles
  • Experience building or contributing to data pipelines in production or project environments
  • Exposure to cloud data platforms (AWS preferred)

AWS Data Engineering Skills

Practical experience with, or strong foundational knowledge of, AWS data services, including:

  • Data Processing: AWS Glue, Lambda, Step Functions
  • Data Storage: S3, basic familiarity with DynamoDB and/or Redshift
  • Data Movement: SQS, basic ETL patterns

Technical Skills

  • Strong proficiency in Python and SQL (Spark SQL, T-SQL, or similar)
  • Solid experience with Apache Spark (PySpark) for data processing
  • Working knowledge of data lake table formats (Apache Iceberg, Delta Lake, or Apache Hudi)
  • Understanding of dimensional modeling and data warehouse concepts (a brief sketch follows this list)
  • Experience with version control (Git) and CI/CD concepts
  • Familiarity with data serialization formats (Parquet, Avro, JSON)
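
For illustration, dimensional modeling here means the classic star-schema pattern: fact tables joined to conformed dimension tables. A hypothetical example in Spark SQL (the gold-tier table and column names are invented for the sketch, not an actual schema):

```python
# Illustrative star-schema query: a fact table joined to two dimensions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-example").getOrCreate()

# Aggregate a hypothetical billing fact table by date and client dimensions.
monthly_revenue = spark.sql("""
    SELECT d.year,
           d.month,
           c.client_name,
           SUM(f.amount) AS revenue
    FROM   gold.fact_billing f
    JOIN   gold.dim_date     d ON f.date_key   = d.date_key
    JOIN   gold.dim_client   c ON f.client_key = c.client_key
    GROUP  BY d.year, d.month, c.client_name
""")
monthly_revenue.show()
```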

Core Competencies

  • Ability to write clean, maintainable, and well-documented code
  • Strong problem-solving skills and attention to detail
  • Effective communication skills and ability to work collaboratively
  • Self-motivated, with the ability to work independently when needed
  • Eager to learn new technologies and best practices

Preferred Qualifications

  • Experience with medallion architectures or tiered data processing patterns
  • Familiarity with infrastructure as code tools (Terraform, CloudFormation)
  • Understanding of CDC (Change Data Capture) patterns
  • Knowledge of data validation libraries (Pydantic, Great Expectations); see the sketch after this list
  • Experience with observability tools (CloudWatch, OpenTelemetry)
  • Exposure to data governance and metadata management concepts
  • Background in legal, financial, or enterprise SaaS domains
  • Experience with async Python (asyncio, aioboto3)
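
As one example of what record-level validation with Pydantic can look like in a pipeline, here is a minimal sketch that splits a batch into valid and rejected records. The model and its fields are hypothetical.

```python
# Sketch of record-level validation with Pydantic before records are promoted
# to a curated tier. The model and its fields are hypothetical.
from datetime import datetime
from decimal import Decimal

from pydantic import BaseModel, ValidationError, field_validator


class OrderRecord(BaseModel):
    order_id: str
    order_ts: datetime
    amount: Decimal

    @field_validator("amount")
    @classmethod
    def amount_must_be_non_negative(cls, v: Decimal) -> Decimal:
        if v < 0:
            raise ValueError("amount must be non-negative")
        return v


def validate_batch(rows: list[dict]) -> tuple[list[OrderRecord], list[dict]]:
    """Split a batch into valid records and rejects (with their errors)."""
    good, bad = [], []
    for row in rows:
        try:
            good.append(OrderRecord(**row))
        except ValidationError as exc:
            bad.append({"row": row, "errors": exc.errors()})
    return good, bad
```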

Technical Environment

You will work with:

  • Languages: Python, SQL, Spark SQL
  • Compute: AWS Glue 5.0, Lambda, Step Functions
  • Storage: S3, DynamoDB, Redshift Serverless
  • Formats: Apache Iceberg, Parquet, JSON
  • Orchestration: AWS Step Functions, EventBridge
  • CI/CD: GitHub Actions, multi-environment deployments
  • Observability: CloudWatch, OpenTelemetry, custom metrics pipelines (see the sketch below)
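
As a flavor of the custom-metrics side of observability, a hypothetical sketch of publishing a pipeline metric to CloudWatch with boto3 (the namespace, metric name, and dimensions are illustrative assumptions):

```python
# Sketch of publishing a custom pipeline metric to CloudWatch.
import boto3

cloudwatch = boto3.client("cloudwatch")


def emit_rows_processed(pipeline: str, count: int) -> None:
    """Record how many rows a pipeline run processed."""
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Pipelines",  # hypothetical namespace
        MetricData=[
            {
                "MetricName": "RowsProcessed",
                "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
                "Value": float(count),
                "Unit": "Count",
            }
        ],
    )
```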

Details

Work Type: Hybrid
Locations: Atlanta, GA, US
Posted: April 15, 2026
Source: Workday