To be clear, President Trump cannot personally go out and delete decades' worth of information. The census continues to exist, and if you know where to look you can peruse data about demographics and population density to your heart's content. Instead, the administration seems focused on two avenues of attack: one, make data harder to find, and two, slash funding until collecting data becomes difficult for government agencies.
In mid-February the Trump administration scrubbed open.whitehouse.gov of datasets created under the Obama administration. It’s not unusual for an incoming president to do a little housecleaning. But there was something odd about the way the data simply disappeared without fanfare. The National Archives and Records Administration (NARA) made an archive available, as required by law, but there were no clear directions on how to find it. There were also some discrepancies between the file sizes and metadata hosted by the NARA and those pulled by third parties before the data was archived.
Alex Howard, deputy director of the Sunlight Foundation, told Engadget he has only “low to moderate confidence” in the completeness of the NARA archive. To make matters worse, the links pointing to the developer tools on the White House portal were simply broken and the NARA couldn’t guarantee API access would work for all the datasets.
The disclosure section of the White House website will supposedly be home to at least some of the information normally hosted on the open data portal, including visitor logs. But, so far, those areas of the site contain nothing more than a promise that they’ll be updated. The White House failed to respond to repeated requests for a timeline on those updates.
Positions created under the Obama administration, such as chief information officer and chief digital officer, were responsible for guiding these programs. But right now those roles sit empty, and the White House has given no indication it plans to fill them any time soon (if at all). The same is true of more policy-focused positions involved in the government data programs, like chief technology officer and chief data scientist. These became increasingly important roles under President Obama, but they don't appear to be big priorities for Trump.
By ignoring these offices, the new administration has let an entire data infrastructure system atrophy. DJ Patil, the former White House chief data scientist, explains that often these are "datasets that people ask for by FOIA, and that's an incredibly inefficient use of taxpayer dollars… A lot of people invested a tremendous amount of time to build the systems and infrastructure" to streamline the process of requesting government data. And, "a big chunk of that infrastructure is allowing what all administrations have typically provided, and that is transparency into who is using the White House and other datasets that people have a right to see."
At this point, according to the Sunlight Foundation, there has been only one confirmed removal of data: animal welfare records from the USDA. But Howard warns, "The big fears that many people have about takedowns haven't materialized so far. That doesn't mean they won't."
The same is true of data manipulation. So far, at least, those fears have not quite panned out. In late February the Wall Street Journal reported that the administration was considering changing how it calculated the trade deficit. Essentially the proposal would make the deficit appear larger by ignoring re-exports — goods that are first imported to the US before being exported to a third country. For example, computers imported from China would be counted against the deficit on their way into the country, but the administration would not count any recouped value if they were sold to Mexico.
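The arithmetic behind the proposal can be sketched with hypothetical figures (all numbers below are invented for illustration, not actual trade data):

```python
# Hypothetical sketch of how excluding re-exports widens the reported
# trade deficit. All dollar figures are made up for illustration.

def trade_deficit(imports, exports, re_exports, count_re_exports=True):
    """Return deficit = imports - counted exports (in $ billions).

    Under the standard method, re-exported goods appear on both sides
    (as imports when they arrive, as exports when they leave), so they
    net out. The proposed change would drop them from the export side
    only, making the same trade flows yield a larger deficit.
    """
    counted_exports = exports if count_re_exports else exports - re_exports
    return imports - counted_exports

# Hypothetical year: $100B imports, $60B exports, $10B of which are re-exports
standard = trade_deficit(100, 60, 10, count_re_exports=True)   # 100 - 60 = 40
proposed = trade_deficit(100, 60, 10, count_re_exports=False)  # 100 - 50 = 50

print(f"standard method: ${standard}B deficit")  # $40B
print(f"proposed method: ${proposed}B deficit")  # $50B
```

Nothing about the underlying trade flows changes between the two calls; only the bookkeeping does, which is why critics described the proposed figures as misleading.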
This sort of misleading math alarmed many career bureaucrats, especially those at the US Trade Representative’s office, which used the new methodology at the administration’s instruction but included stiff objections to be presented alongside the new numbers.