Data Engineering Lead


Exciting Career Opportunity: Data Engineering Lead at JLL Technologies

At JLL, we support the Whole You, both personally and professionally. Our team members are shaping the future of real estate by delivering world-class services, advisory, and innovative technology to our clients. We are committed to hiring the best and most talented individuals in our industry and supporting them through professional growth, flexibility, and personalized benefits to balance life in and outside of work. Whether you have deep experience in commercial real estate, skilled trades, or technology, or you’re seeking to apply your relevant experience to a new industry, JLL empowers you to shape a brighter path forward and thrive both professionally and personally.

About JLL and JLL Technologies

JLL is a leading professional services firm specializing in real estate and investment management. Our vision is to reimagine the world of real estate, creating rewarding opportunities and amazing spaces where people can achieve their ambitions. In doing so, we aim to build a better future for our clients, team members, and communities.

JLL Technologies is a specialized division within JLL. Our mission is to bring technology innovation to commercial real estate by delivering unparalleled digital advisory, implementation, and services to organizations globally. Our goal is to leverage technology to enhance the value and liquidity of buildings worldwide while improving the productivity and well-being of those who occupy them.

Job Role: Data Engineering Lead

The JLL Technologies Enterprise Data team is a newly established central organization that oversees JLL’s data strategy. We are seeking a self-starting Lead Data Engineer to join our diverse and fast-paced environment. As an individual contributor, you will design and develop strategic data solutions using the latest technologies and patterns. This global role requires collaboration with the broader JLLT team at the country, regional, and global levels, utilizing your in-depth knowledge of data, infrastructure, and technology.

Responsibilities:

  • Design, architect, and develop solutions that leverage cloud big data technologies to ingest, process, and analyze large, disparate datasets, meeting and exceeding business requirements.
  • Develop systems that cleanse, normalize, and structure diverse datasets and build data pipelines from various internal and external sources.
  • Identify performance bottlenecks in data pipelines, ETL processes, and queries, implementing optimization strategies to improve system performance.
  • Define and implement data architecture best practices to ensure the scalability, availability, and security of data infrastructure, including data lakes and warehouses.
  • Collaborate with data scientists and analysts to design and implement data models supporting complex analytical and reporting requirements.
  • Conduct data profiling, validation, and quality checks to ensure data integrity and consistency.
  • Design and develop scalable, reliable, and efficient data pipelines using technologies such as Apache Spark, Airflow, and cloud-based storage services (e.g., ADLS, AWS S3, Google Cloud Storage); a minimal sketch follows this list.
  • Optimize ETL processes for smooth and timely ingestion of large volumes of structured and unstructured data from various sources.
  • Develop proofs of concept (POCs) to influence platform architects, product managers, and software engineers, and to validate solution proposals.
  • Establish and enforce data governance standards, policies, and procedures for compliance and data security.
  • Develop and maintain documentation related to data flows, dictionaries, lineage, and integration processes.
  • Mentor team members and contribute to the organization’s growth.
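
For illustration only, here is a minimal PySpark sketch of the cleanse-normalize-load pattern described above. The storage paths, column names, and business key are hypothetical placeholders, not JLL specifics; the same structure applies to S3 or Google Cloud Storage URIs.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("lease-events-ingest").getOrCreate()

    # Ingest raw, semi-structured events from cloud object storage
    # (the abfss:// ADLS path is illustrative).
    raw = spark.read.json("abfss://raw@exampleaccount.dfs.core.windows.net/lease-events/")

    # Cleanse and normalize: drop malformed rows, standardize types and casing,
    # and deduplicate on the (hypothetical) business key.
    clean = (raw
             .dropna(subset=["property_id", "event_ts"])
             .withColumn("event_ts", F.to_timestamp("event_ts"))
             .withColumn("country", F.upper(F.trim("country")))
             .dropDuplicates(["property_id", "event_ts"]))

    # Land the conformed data as partitioned Parquet for downstream analytics.
    (clean.write
          .mode("overwrite")
          .partitionBy("country")
          .parquet("abfss://curated@exampleaccount.dfs.core.windows.net/lease-events/"))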

Qualifications:

  • Bachelor’s degree in Information Science, Computer Science, Mathematics, Statistics, or a related discipline.
  • 8+ years of hands-on engineering experience, with curiosity about technology and adaptability to change.
  • In-depth knowledge of cloud computing (AWS, Azure), microservices, streaming technologies, and security.
  • 3+ years of active development experience as a data developer using PySpark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, and Azure Data Lake Storage (see the streaming sketch after this list).
  • Experience in designing and developing data management solutions using relational and non-relational databases.
  • Expertise in building data pipelines from varied sources that deliver high-quality data for KPI and metrics development.
  • 3+ years of experience with source code control systems and CI/CD tools.
  • A self-motivated team player capable of executing multiple projects in a fast-paced environment.
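
As a minimal sketch of the streaming stack named above, the Spark Structured Streaming job below reads from Azure Event Hubs through its Kafka-compatible endpoint and lands the stream in Azure Data Lake Storage. The namespace, event hub, connection string, and storage account are placeholders, and the job assumes the spark-sql-kafka connector and ADLS credentials are already configured.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

    # Read from an Event Hubs namespace via its Kafka-compatible endpoint
    # (port 9093; the SASL username is literally "$ConnectionString").
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "examplens.servicebus.windows.net:9093")
              .option("subscribe", "building-telemetry")  # hypothetical event hub name
              .option("kafka.security.protocol", "SASL_SSL")
              .option("kafka.sasl.mechanism", "PLAIN")
              .option("kafka.sasl.jaas.config",
                      'org.apache.kafka.common.security.plain.PlainLoginModule required '
                      'username="$ConnectionString" password="<event-hubs-connection-string>";')
              .load())

    # Land the raw stream in ADLS as Parquet; the checkpoint enables
    # exactly-once delivery to the file sink.
    query = (events.selectExpr("CAST(value AS STRING) AS body", "timestamp")
             .writeStream
             .format("parquet")
             .option("path", "abfss://raw@exampleaccount.dfs.core.windows.net/telemetry/")
             .option("checkpointLocation",
                     "abfss://raw@exampleaccount.dfs.core.windows.net/_checkpoints/telemetry/")
             .trigger(processingTime="1 minute")
             .start())

    query.awaitTermination()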

What You Can Expect from Us: