Title: Data Engineer
Location: North America
Application Deadline: 2022-08-31
Role: Data Engineer
Status: FTE
Location: Remote [must be based in the United States]
30,000 ft. Overview:
Fully remote [Data Engineering] opportunity to join one of the nation’s largest aggregators of municipal and school district data, which promotes municipal wellness and sustainability through its cloud-based data management tools and proprietary performance analytics applications.
An industry leader in municipal and education data management, the company recently earned a 2021 CODiE Award and an EdTech Breakthrough Award, and has appeared on the GovTech100 list for years running.
Leading SaaS Provider | Platform Overview:
The homegrown SaaS platform allows municipalities and school districts to manage their data and access cost-effective products and advisory services, supporting reliable budgets, financial projections, academic achievement metrics, trend reports, and better-informed forward-looking decisions.
An integral partner that takes the burden of report generation off municipal and school employees’ hands. The team fully customizes the platform to meet each user’s exact needs and provides the manpower, the technology, and the personalized service needed to bring internal information into an online space accessible to both the government and the public.
This team works hands-on, one-on-one, with users to set up data, create visual dashboards, and continuously provide reminders of upcoming deadlines or changes in government and educational compliance requirements.
Role | Tech Overview:
Enabling data pipelines to flow like water instead of being thick as molasses! The stack: AWS Glue (extracting and loading), AWS Step Functions (workflows and interaction between data sources), and PySpark (Python + Spark scripting).
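For context, a Glue-plus-Step-Functions pipeline like the one described is typically orchestrated with an Amazon States Language definition that chains Glue job runs; a minimal sketch (the job names here are hypothetical, not from the posting):

```json
{
  "Comment": "Hypothetical two-stage Glue pipeline: extract, then load.",
  "StartAt": "ExtractMunicipalData",
  "States": {
    "ExtractMunicipalData": {
      "Type": "Task",
      "Resource": "arn:aws:states:::glue:startJobRun.sync",
      "Parameters": { "JobName": "extract-municipal-data" },
      "Next": "LoadToRedshift"
    },
    "LoadToRedshift": {
      "Type": "Task",
      "Resource": "arn:aws:states:::glue:startJobRun.sync",
      "Parameters": { "JobName": "load-to-redshift" },
      "End": true
    }
  }
}
```

The `.sync` integration makes each state wait for its Glue job to finish before the workflow advances.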
• 2-4+ years of hands-on experience using AWS Services, especially AWS Glue.
• Strong hands-on experience with AWS Glue, Step Functions, S3, Redshift, and Lambda.
• Experience with PostgreSQL and MongoDB databases.
• Strong hands-on experience in developing ETL processes with PySpark. Handling complex unstructured data in a structured manner.
• Strong knowledge of data warehousing concepts.
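"Handling complex unstructured data in a structured manner" usually means flattening nested records (e.g. JSON feeds from a municipal system) into tabular columns. A minimal pure-Python sketch of that idea, with hypothetical field names; in a Glue job the same shape would come from PySpark's nested-column selection and `explode`:

```python
def flatten(record, parent_key="", sep="."):
    """Recursively flatten a nested dict into dotted column names."""
    flat = {}
    for key, value in record.items():
        col = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, col, sep))
        else:
            flat[col] = value
    return flat

# Hypothetical unstructured record from a school-district feed.
raw = {
    "district": "Springfield",
    "budget": {"year": 2022, "total": 1_250_000},
}

flat = flatten(raw)
# e.g. {"district": "Springfield", "budget.year": 2022, "budget.total": 1250000}
```

Dotted column names map cleanly onto the flat schemas that Redshift tables expect downstream.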
*Interested in learning more, or know someone who might be? To get the dialogue started, you can simply apply to this post or reach out to me directly with your resume attached: [email protected]
Thank you for taking a moment to check this out. We look forward to hearing from you!
Tommy Haugh | Ntroduced.com
All Contents © 1999-2022. The GIS Jobs Clearinghouse