Data Engineering Best Practices


This is the code repository for Data Engineering Best Practices, published by Packt.

Architect robust and cost-effective data solutions in the cloud era

What is this book about?

Revolutionize your approach to data processing in today’s fast-paced business landscape with this essential guide to data engineering. Discover the power of scalable, efficient, and secure data solutions through expert guidance on data engineering principles and techniques. Written by two industry experts with over 60 years of combined experience, the book offers deep insights into best practices, architecture, agile processes, and cloud-based pipelines. You’ll start by examining the challenges data engineers face and learn how an agile, future-proof, and comprehensive data solution architecture addresses them. As you explore the extensive toolkit and master the capabilities of its various tools, you’ll gain the knowledge needed for independent research. Covering everything from data engineering fundamentals onward, the guide uses real-world examples to illustrate potential solutions. It builds your skills to architect scalable data systems, implement agile development processes, and design cloud-based data pipelines, and it equips you to harness serverless computing and microservices to build resilient data applications.

This book covers the following exciting features:

  • Architect scalable data solutions within a well-architected framework
  • Implement agile software development processes tailored to your organization's needs
  • Design cloud-based data pipelines for analytics, machine learning, and AI-ready data products
  • Optimize data engineering capabilities to ensure performance and long-term business value
  • Apply best practices for data security, privacy, and compliance
  • Harness serverless computing and microservices to build resilient, scalable, and trustworthy data pipelines

If you feel this book is for you, get your copy today!

https://www.packtpub.com/

Following is what you need for this book: If you are a data engineer, ETL developer, or big data engineer who wants to master the principles and techniques of data engineering, this book is for you. A basic understanding of data engineering concepts, ETL processes, and big data technologies is expected. This book is also for professionals who want to explore advanced data engineering practices, including scalable data solutions, agile software development, and cloud-based data processing pipelines.


Get to Know the Authors

Richard J. Schiller is a chief architect, distinguished engineer, and startup entrepreneur with 40 years of experience delivering real-time, large-scale data processing systems. He holds an MS in computer engineering from Columbia University’s School of Engineering and Applied Science and a BA in computer science and applied mathematics. He has been involved with two prior successful startups and has coauthored three patents. He is a hands-on systems developer and innovator.

David Larochelle has been involved in data engineering for startups, Fortune 500 companies, and research institutes. He holds a BS in computer science from the College of William & Mary, an MS in computer science from the University of Virginia, and a master’s degree in communication from the University of Pennsylvania. His career spans over 20 years across this wide range of organizations, from early-stage startups to established companies and research labs.
