Engineer II, Data Engineering

Neiman Marcus
Irving, Texas, United States
18 Feb 2022
08 Sep 2022
This is an exciting opportunity to be part of the Data & Analytics Delivery Organization at Neiman Marcus. Neiman Marcus is undergoing a digital transformation, and insights from our data are driving the transformation of Neiman Marcus Group into a Luxury Customer Platform, providing the best customer experience across all our brands.

We are looking for a Data Engineer who combines the business acumen to interface directly with key stakeholders and understand the problem with the skills and vision to translate that need into a world-class technical solution using the latest technologies. You will be in a hands-on role, responsible for building data engineering solutions for NMG Enterprise using a cloud-based data platform. You will deliver day-to-day technical work and participate in technical design, development, and support for data engineering workloads.

Job Requirements:
  • Understand and analyze data from multiple data sources and develop technology to integrate the enterprise data layer
  • Create automated pipelines to ingest and process structured and unstructured data from source systems into analytical platforms, using batch and streaming mechanisms and leveraging a cloud-native toolset
  • Process disparate data sets with the appropriate technologies and understand the correlations and patterns that exist between them
  • Implement orchestrations of data pipelines and environment using Airflow
  • Implement custom applications using Kinesis, Lambda, and other AWS services as required to address streaming use cases
  • Implement automation to optimize data platform compute and storage resources
  • Develop and enhance end-to-end monitoring capability of cloud data platforms
  • Participate in educating and cross training other team members
  • Provide regular updates to all relevant stakeholders
  • Participate in daily scrum calls and provide clear visibility to work products
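As an illustrative sketch of the batch-ingestion duties above (all field names, record shapes, and business rules here are hypothetical, not taken from the posting), a minimal Python step that parses newline-delimited JSON source records and writes a cleaned CSV extract might look like this:

```python
import csv
import io
import json

def ingest_batch(raw_lines, out_stream):
    """Parse newline-delimited JSON order records and write a cleaned
    CSV extract. Field names ("order_id", "amount") are hypothetical.
    Returns the number of records written."""
    writer = csv.writer(out_stream)
    writer.writerow(["order_id", "amount"])
    count = 0
    for line in raw_lines:
        record = json.loads(line)
        # Skip records missing the business key
        if "order_id" not in record:
            continue
        writer.writerow([record["order_id"], record.get("amount", 0)])
        count += 1
    return count

if __name__ == "__main__":
    raw = ['{"order_id": 1, "amount": 250.0}', '{"amount": 10}']
    buf = io.StringIO()
    ingest_batch(raw, buf)
```

In a production pipeline this kind of step would typically be wrapped in an Airflow task and read from/write to cloud storage rather than in-memory buffers.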

  • BS in Computer Science or related field
  • 3+ years of experience in the data and analytics space
  • 3+ years of experience with RDBMS concepts, with strong data analysis and SQL skills
  • Must have strong Python skills, including AWS Lambda functions and Airflow DAG processing
  • 2+ years of Linux OS command line tools and bash scripting proficiency
  • 3+ years of Python experience with solid programming skills; needs to be an expert at a 4/5 level
  • Working knowledge of Python, Spark, Airflow, Hive, Snowflake, and AWS services
  • Exposure to software engineering concepts such as parallel data processing, data flows, REST APIs, JSON, XML, and microservice architectures
  • 1+ year of experience working on Big Data Processing Frameworks and Tools
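The streaming responsibilities above (Kinesis plus Lambda) might, in practice, resemble this minimal sketch of a Lambda handler consuming a Kinesis event. The payload shape and the "customer_id" field are hypothetical assumptions for illustration; only the Kinesis event envelope (base64-encoded `Records[].kinesis.data`) follows the standard AWS event format.

```python
import base64
import json

def handler(event, context):
    """Hypothetical AWS Lambda handler for a Kinesis stream: decode
    each base64-encoded record, parse it as JSON, and collect a field.
    In a real deployment this would forward results downstream
    (e.g., to Snowflake or S3) instead of returning them."""
    results = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        doc = json.loads(payload)
        results.append(doc.get("customer_id"))
    return results
```

Locally, such a handler can be smoke-tested by constructing a Kinesis-style event dict and calling `handler(event, None)` directly.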

Nice to have:
  • Prior working experience with a data science workbench
  • Knowledge of data engineering aspects within machine learning pipelines (e.g., train/test splitting, scoring process, etc.)
