Judge Peck does it Again! 2013 Proclaimed the “Year of Information Governance”
In terms of making headlines, LegalTech 2013 did not disappoint. Last year’s LegalTech followed quickly on the heels of Judge Peck’s groundbreaking words in Da Silva Moore, where he stated that computer-assisted review in eDiscovery is “acceptable in appropriate cases.” While his quote and the Da Silva Moore case (including the now moot recusal shenanigans) have faded from the front pages, Judge Peck used the podium at LTNY 2013 to call his next shot:
“If 2012 was the year of predictive coding or technology-assisted review, 2013 or ’14 seems to be information governance. …[I]t really would be helpful if systems were in place to get rid of the junk. Get rid of the ‘what time are we going to lunch’ emails that nobody bothers to delete, because that would help reduce the effort and cost of discovery whenever it’s needed,” he noted.
Given Judge Peck’s place in the pantheon of legal technologists, practitioners and lay people alike should certainly take note of his recent prognostication. It’s interesting that while he spoke generally about information governance, he homed in on the defensible deletion use case and its downstream impact on eDiscovery costs.
The defensible deletion topic was addressed at length in a panel I sat on with Bennett Borden, John Rosenthal and AIG’s Director of eDiscovery, Clifton Dutton (dutifully moderated by Bill Tolson). During the session we debunked a number of myths about defensible deletion that seem to be holding back more mainstream adoption.
Myth 1. There aren’t existing software applications that facilitate Defensible Deletion
Even Judge Peck may have fallen for this first myth. He wishes tools were “in place” to “get rid of the junk.” His comment could mean either that such tools don’t exist or that, where they do exist, they’re not universally deployed. On the first point (i.e., that there aren’t such tools around), there are in fact a number of tools that facilitate defensible disposition. And many of those applications piggyback on the same indexing and content-analysis functionality found in leading eDiscovery tools.
For example, the Department of Energy is currently using an application to auto-classify content into large buckets of information that are retained for set periods or defensibly deleted. The application:
“[A]ctively crawls and indexes content and metadata from emails, tagging them by categories that match the EERE General Records Schedule” and then the “software automatically categorizes new documents based on a training set using positive and negative sample documents.”
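The training-set approach the DOE description mentions can be illustrated with a deliberately minimal sketch. This is not the DOE’s actual software; the category names and sample documents below are hypothetical, and a real classifier would use far richer features than a bag-of-words overlap score.

```python
from collections import Counter

def tokenize(text):
    # Lowercase bag-of-words; real tools index much richer features.
    return Counter(text.lower().split())

def train(samples):
    """Build per-category token profiles from labeled sample documents.

    samples: list of (category, text) pairs -- the "training set using
    positive and negative sample documents" described above.
    """
    profiles = {}
    for category, text in samples:
        profiles.setdefault(category, Counter()).update(tokenize(text))
    return profiles

def classify(profiles, text):
    """Assign a new document to the best-matching retention category."""
    tokens = tokenize(text)
    def overlap(profile):
        return sum(min(n, profile[tok]) for tok, n in tokens.items())
    return max(profiles, key=lambda c: overlap(profiles[c]))

# Hypothetical retention buckets, for illustration only.
samples = [
    ("retain-7-years", "quarterly budget report for the energy program"),
    ("retain-7-years", "contract award and budget figures"),
    ("delete-90-days", "what time are we going to lunch today"),
    ("delete-90-days", "lunch plans and the office party"),
]
profiles = train(samples)
print(classify(profiles, "updated budget report attached"))  # retain-7-years
print(classify(profiles, "lunch at noon?"))                  # delete-90-days
```

The point of the sketch is simply that once documents land in a retention bucket, the keep/delete decision becomes mechanical: each bucket carries a set retention period, after which content can be defensibly deleted.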
Given the presence of enterprise-grade software applications for defensible deletion, the lack of adoption is more a “will” issue than a “skill” issue. Getting the right level of executive buy-in and building a cross-departmental team are often the rate-limiting factors, not the absence of tools to accomplish information governance.
Myth 2. Data Volume is the Issue
By now everyone’s seen hundreds of data volume charts that reveal that information (particularly unstructured data) is doubling roughly every 18 months – moving us quickly beyond the terabyte realm to the petabyte age. And, given that the first V of Big Data is Volume, it’s tempting to think that the raison d’être for defensible deletion is attacking surging volumes. Fortunately, Bennett Borden (in our session) set the record straight.
To paraphrase Bennett, “data volume is an issue, but it isn’t the issue – visibility is.” His point was that even large volumes are manageable, particularly if an organization has insight into what the data contains, which then drives the keep/delete decision-making process. Without visibility into the data or an understanding of where it resides, even terabyte-level stores of electronically stored information (ESI) can be problematic.
Myth 3. There’s No Precedent for Defensible Deletion
During the panel, John Rosenthal noted that compared to predictive coding (for example) there wasn’t nearly the same level of judicial signoff on information governance programs like defensible deletion. And, while there may not be the same level of furor currently among jurists, there certainly is support for the reasonable and defensible disposition of data.
First, FRCP 37(e) provides a “safe harbor” designed to protect organizations from sanctions when their computer systems automatically destroy or delete email, archival data, and other ESI. It states that “[a]bsent exceptional circumstances, a court may not impose sanctions under [the Federal Rules of Civil Procedure] on a party for failing to provide electronically stored information lost as a result of the routine, good-faith operation of an electronic information system.”
This safe harbor has been invoked in a number of cases, including the recent Viramontes v. U.S. Bancorp. In Viramontes, the defendant bank defeated a sanctions motion because of the effective procedures underlying its defensible deletion strategy. The bank implemented a retention policy that kept emails for 90 days, after which they were overwritten and destroyed, except when they needed to be preserved via a litigation hold process. This “neutral” defensible deletion policy, combined with 37(e), was enough to save the day and prevent spoliation sanctions.
“The Court also finds there is no evidence that the emails were destroyed in bad faith or, put another way, that the destruction was done by U.S. Bank for the purpose of hiding unfavorable information. The record shows that the emails were destroyed in a routine manner pursuant to a neutral policy.”
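The logic of a neutral retention policy like the bank’s is simple enough to sketch. This is an illustration of the general pattern, not the bank’s actual system; the function name and 90-day constant merely mirror the policy described in Viramontes.

```python
from datetime import date, timedelta

RETENTION_DAYS = 90  # the retention window described in Viramontes

def should_delete(email_date, today, on_litigation_hold):
    """Routine, good-faith deletion under a neutral retention policy.

    Emails older than the retention window are purged unless a
    litigation hold suspends deletion for that custodian or matter.
    """
    if on_litigation_hold:
        return False  # the preservation duty trumps the retention schedule
    return (today - email_date) > timedelta(days=RETENTION_DAYS)

print(should_delete(date(2013, 1, 1), date(2013, 6, 1), False))  # True
print(should_delete(date(2013, 1, 1), date(2013, 6, 1), True))   # False
```

The key design point, and the reason the policy reads as “neutral,” is that the deletion rule never looks at an email’s content – only its age and hold status – so destruction cannot be aimed at unfavorable information.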
It’s great to finally hear jurists like Judge Peck call for defensible deletion. But, the above myths can be an unnecessary obstruction to organizations finally cleaning house to avoid the type of “infogluttony” that creates painful downstream eDiscovery costs.
Let’s hope 2013 really is the year of true information governance as predicted…