Experience: 5+ Years
Work Mode: Remote
Notice Period: 0 to 30 Days
Job Description:
· 5+ years of experience in Data Engineering / Backend Data Systems
· Expert-level Python in production environments, including asyncio, type hints, clean code practices, and testing
· Hands-on experience integrating REST APIs at scale, including rate limiting, quota management, batching, retries, and failure handling
· Hands-on experience with YouTube Data API v3 or other high-volume third-party APIs
· Strong experience with SQL and PostgreSQL, including query optimization, indexing, partitioning, and performance tuning
· Experience working with time-series databases such as TimescaleDB, InfluxDB, or similar
· Hands-on experience building large-scale data ingestion pipelines
· Experience with distributed job processing systems such as Celery, Redis, RabbitMQ, or equivalent
· Experience using workflow orchestration tools like Apache Airflow, Prefect, or Dagster
· Strong understanding of caching strategies using Redis or equivalent technologies
· Hands-on experience with Docker and containerized application deployments
· Experience implementing monitoring, logging, and alerting for production systems
· Solid understanding of data modeling, system design, scalability, and performance optimization
· Experience working across the full SDLC in Agile / SCRUM environments
Nice to Have:
· Experience building analytics, tracking, or monitoring platforms
· Experience designing trend detection or anomaly detection algorithms
· Experience working with API quota-constrained systems
· Experience with cloud platforms (GCP preferred; AWS or Azure acceptable)
· Experience with Kubernetes in production environments
· Familiarity with frontend frameworks such as React or Next.js
· Experience working on high-scale systems (hundreds of thousands to millions of entities)