Ready to learn data engineering?
Apply for one of our upcoming classes.
In just 12 weeks, you’ll learn the skills and tools that full-time data engineers need to make an impact in the industry. Our curriculum focuses on Big Data technologies like Hadoop, Spark, Kafka, Storm, HBase, Solr, Cloud Computing, and Lambda Architectures. Graduates of this program will know how to set up efficient, scalable data infrastructure and pipelines that can store, analyze, and extract valuable insights from Big Data.
After graduating from one of our programs, many of our alumni pursue roles as data engineers or data scientists, while others move into machine learning, software engineering, and data product management. At Hiring Day, we connect students with top technology companies to help them land their perfect job. Throughout the Galvanize program, students have the opportunity to explore these career paths and engage with professionals working in the field, as well as potential employers.
The curriculum covers the theory and practice of working with data at scale, including key components of the Hadoop ecosystem and newer technologies such as Spark, Storm, Kafka, and Docker. Galvanize, in partnership with Nvent (a Big Data consultancy with experience solving real-world Big Data problems for Fortune 500 companies), has co-designed the curriculum to ensure it prepares you for real-world problems and teaches skills used in industry. Participants complete 600+ hours of practical, hands-on, project-based curriculum. Our approach combines established technologies such as Hadoop and query languages like SQL with quickly growing frameworks like Spark and Scala. Each module is built around a common use case sourced from our industry partners, with students building functional prototypes each week leading up to the capstone project.
| Week | Topics |
| --- | --- |
| Week 1 | Advanced Java, Databases and Relational Systems |
| Week 2 | Hadoop Ecosystem: MapReduce, HDFS, YARN, Lambda Architecture |
| Week 3 | Distributed Systems: Design, Consistency Models, Capacity Planning |
| Week 4 | Information Architecture, Advanced SQL, Hive/Pig |
| Week 5 | Advanced Abstractions and Data Pipelines: Cascading, Flume, Sqoop, Solr |
| Week 6 | Security and Special Topics: Data Quality, Management, Tuning & Performance |
| Week 7 | Real-time Data and Streaming: Kafka, Storm, Cassandra, Docker |
| Week 8 | Algorithms & Machine Learning at Scale: Spark |
| Weeks 9–12 | Capstone Projects, Hiring Day, and Onsite Interviews |
Want to work in data engineering after graduation? Nvent is looking for new team members for its Big Data consultancy. Students interested in working at Nvent can apply for one of eight Nvent scholarships, each covering full tuition for the Galvanize Data Engineering course.
To be considered for the Nvent scholarship, select the Nvent checkbox during the application process.
At our campuses, diversity and collaboration are the norm: developers, data scientists, and community members learn from each other and work collaboratively. With expert instructors, startups, and industry partners working side by side, there’s always someone to help you get unstuck, offer you new challenges, or provide a key introduction.
At Galvanize, you learn data engineering by doing data engineering. Our curriculum is designed around solving practical, real-world problems with relevant data sets. After completing your 2-week capstone project, you’ll work with our outcomes team for interview coaching and practice, resume review, and introductions to partner companies, ensuring you put your best foot forward after graduation. When the program ends, you’ll present your project and interview with 30+ companies at Hiring Day.