For many organizations, Hadoop is the first step for dealing with massive amounts of data. The next step? Processing and analyzing datasets with the Apache Pig scripting platform. With Pig, you can batch-process data without having to create a full-fledged application, making it easy to experiment with new datasets. Updated with use cases and programming examples, this second edition is the ideal learning tool for new and experienced users alike. You’ll find comprehensive coverage on key features such as the Pig Latin scripting language and the Grunt shell. When you need to analyze terabytes of data, this book shows you how to do it efficiently with Pig.
Another totally readable introduction to something new, without a full StackOverflow safety net yet. (Pig is very good: think of it as an imperative, Pythonic SQL, an omnivorous abstraction over MapReduce with rich data structures, optional Java typing, optional schema declaration, and full extensibility in Java, Python, etc. Pig Latin is not Turing-complete, but it offers several no-fuss ways to extend and delegate. I'm porting a bunch of SAS and MapReduce code into Pig Latin at the moment; the job can sometimes be done in ten times fewer lines.) However, I read this in the slightly dazed and impermeable way that I read anything I am to read for work.
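The "ten times fewer lines" claim is easy to believe once you see how Pig Latin reads. A minimal sketch of the classic word count, which would take dozens of lines as a raw Java MapReduce job (the input and output paths here are hypothetical):

```pig
-- Load each line of text as a single chararray field.
lines   = LOAD 'input.txt' AS (line:chararray);
-- Split each line into words, one word per record.
words   = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
-- Group identical words together and count each group.
grouped = GROUP words BY word;
counts  = FOREACH grouped GENERATE group AS word, COUNT(words) AS n;
STORE counts INTO 'wordcount_out';
```

Each statement defines a relation; Pig compiles the whole dataflow into one or more MapReduce jobs behind the scenes, which is exactly the "abstraction over MapReduce" the book covers.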
A good programming book exposes me to many of the fundamentals of the language. A great programming book has me feeling ready to go do something in that language. This was a good programming book and nothing more.
A nice introduction to Pig, a dataflow scripting language for Hadoop. Since Pig is being actively developed, some of the information is already outdated, but the book gives a good enough overview of what Pig is about.