<p><a title="@radar.oreilly.com - Kiyoto Tamura - The log: The lifeblood of your data pipeline" href="http://radar.oreilly.com/2015/04/the-log-the-lifeblood-of-your-data-pipeline.html" target="_blank">BigDataFr recommends: The log: The lifeblood of your data pipeline. Why every data pipeline should have a Unified Logging Layer.
« The value of log data for business is unimpeachable. At every level of the organization, the question, “How are we doing?” is answered, ultimately, by log data. Error logs tell developers what went wrong in their applications. User event logs give product managers insights on usage. If the CEO has a question about the next quarter’s revenue forecast, the answer ultimately comes from payment/CRM logs. In this post, I explore the ideal frameworks for collecting and parsing logs.
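As a toy illustration of what “parsing” means here, the sketch below turns one raw web-server log line into a structured record. The log format and the sample line are hypothetical stand-ins (Apache-style common log format), not taken from the article:

```python
import re

# Hypothetical Apache-style access-log pattern: IP, timestamp,
# request line, HTTP status, and response size, captured as named groups.
LINE_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\d+)'
)

def parse_line(line):
    """Return a dict of structured fields, or None if the line doesn't match."""
    m = LINE_RE.match(line)
    if not m:
        return None
    rec = m.groupdict()
    # Convert numeric fields so downstream consumers get real numbers.
    rec["status"] = int(rec["status"])
    rec["size"] = int(rec["size"])
    return rec

sample = '127.0.0.1 - - [10/Oct/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'
print(parse_line(sample))
```

A real collection layer would apply this kind of transformation continuously to streams of lines, not one sample string, but the core step is the same: raw text in, structured event out.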
Apache Kafka architect Jay Kreps wrote a wonderfully crisp survey on log data. He begins with the simple question of “What is the log?” and elucidates its key role in thinking about data pipelines. Jay’s piece focuses mostly on storing and processing log data. Here, I focus on the steps before storing and processing.