Data Engineer (Mid/Senior)
Squaredev
📍Athens, GR
Job Description
Why are you looking for a job?
If your answer ticks the boxes below, this could be the start of a great collaboration.
- You have a curious mind - you won't understand what we're talking about otherwise.
- You want to keep learning about technology - you won't survive here if you don't.
- You want to make the world a bit better - we won't get along if you don't.
We happen to be just like that as well. We like hacking things here and there (you included) and creating scalable solutions that bring value to the world.
Squaredev? 🐿️
We use state-of-the-art technology to build solutions for our own customers and for the customers of our partners. We make sure we stay best-in-class by participating in research projects across Europe, collaborating with top universities and enterprises on AI, Data, and Cloud.
Role overview
We are looking for experienced Data Engineers to join our team and work on enterprise-scale data and AI projects.
You will be part of projects built either on IBM Watson / Microsoft Fabric technologies or in Databricks environments, collaborating closely with data scientists and software engineers to deliver AI-ready datasets and analytics solutions.
Requirements
The ideal candidate will be responsible for:
- Designing and implementing data pipelines (batch and streaming) for analytics and AI workloads.
- Building and maintaining data lakes / warehouses (OneLake, BigQuery, Delta Lake, or similar).
- Developing and optimizing ETL/ELT workflows using tools like Spark, dbt, Airflow, or Prefect.
- Ensuring data quality, observability, and governance across all pipelines.
- Working closely with data scientists and software engineers to deploy and maintain AI-ready datasets.
To excel in this role, you'll need:
- At least 3 years of relevant work experience.
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
- Experience working with cloud platforms (e.g. Microsoft Azure, AWS, GCP).
- Hands-on experience with IBM Watson and/or BAW, or experience on Microsoft Fabric or Databricks projects.
- Strong experience in SQL and Python (PySpark or similar).
- Hands-on experience with data modeling, ETL frameworks and data orchestration tools.
- Familiarity with distributed systems and modern data platforms (Spark, Databricks, Fabric, Snowflake, or BigQuery).
- Understanding of data lifecycle management, versioning, and data testing.
- Solid grasp of Git and CI/CD workflows.
- Strong communication skills in English.
Nice to have:
- Knowledge of vector databases (pgvector, Pinecone, Milvus) or semantic search pipelines.
- Interest in or knowledge of LLMs and AI pipelines.
- Familiarity with data catalogs, lineage tools, or dbt tests.
- DevOps familiarity (Docker, Kubernetes, Terraform).
Benefits
🌍 Hybrid working model: A flexible work approach that supports balance and focus.
🍽️ Ticket restaurant card: A monthly food budget to keep you energized every day.
🏥 Private health insurance: Solid private health coverage for peace of mind.
🏝️ 5 extra personal days off: Extra days off to recharge.
❤️🩹 Extra sick leave days: Paid sick days, because your health always comes first.
🤖 AI coding assistant: An AI sidekick to help you code faster and smarter.
💻 Apple MacBook Pro: A powerful MacBook Pro so you can truly do your magic.
Details
- Department: AI & Data
- Work Type: Hybrid
- Locations: Athens, GR
- Posted: February 26, 2026
- Source: workable