What’s happening this week at the intersection of management and technology.

The paradox of automation: Earlier this year, Facebook exorcised those pesky human editors who were introducing political bias into its Trending news list and left the job to algorithms. Now, reports Caitlin Dewey in The Washington Post, the Trending news isn’t biased, but some of it is fake. Turns out the algorithms can’t tell a real news story from a hoax.

Facebook says it can improve its algorithms, but errors of judgment aren’t the only pitfall in transferring human tasks to machines. There’s also the paradox of automation. “It applies in a wide variety of contexts, from the operators of nuclear power stations to the crews of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators,” says Tim Harford in an excerpt published by The Guardian from his new book, Messy: The Power of Disorder to Transform Our Lives. “The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face.”

Harford borrows William Langewiesche’s harrowing description of the crash of Air France Flight 447 to illustrate three problems with automation: “First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. … Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skillful response.”

The excerpt is worth a read — especially if it prompts you to ask whether your company’s automation initiatives entail similar risks.

AI stimulus from the U.S. government: In August, this column included a link to IBM’s response to a call from the White House for information on artificial intelligence. Last week, the White House issued its own AI report and a strategic framework for developing a national AI capability. And U.S. commander in chief Barack Obama discussed it, along with MIT Media Lab director (in chief) Joi Ito, in an extensive interview with Wired editor in chief Scott Dadich.

There’s news and possibly opportunities in all this for companies that are pursuing AI-related initiatives. For instance, the reports and the president say that the United States is going to make long-term investments in AI research and development. They are encouraging the lowering of regulatory hurdles by asking agencies like the FAA and DOT to respond quickly and with a “light touch” to AI innovation. They are opening up federal databases to AI initiatives and trying to establish open data standards. And they plan an educational push — not only to ensure companies have access to skilled workers, but also to help workers whose jobs are going to disappear through the transition. It sounds like the executive branch is getting its program together. Of course, there’s no telling whether it will survive beyond Jan. 20, 2017.

Coding the letter of the law: Since long reads seem to be on the menu this week, I might as well pile it on with Stephen Wolfram’s engagingly titled “Computational Law, Symbolic Discourse and the AI Constitution.” It’s a 12,000-word blog post by the guy who became the youngest recipient of a MacArthur Foundation “genius grant,” at age 21, and used it as a springboard to develop a computer language capable of expressing everything we humans do.

Wolfram’s massive missive is about formalizing human law — as in contracts — using a computational language. “I think it’s a really important thing to do,” he writes, “not just because it’ll enable all sorts of new societal opportunities and structures, but because I think it’s likely to be critical to the future of our civilization in its interaction with artificial intelligence.”

Anybody who has ever signed a contract knows that it’s only as good as the people behind it. No matter how hard you try to create an ironclad contract, there are always as many interpretations and loopholes as there are in the U.S. tax code. That’s because human language depends on the meaning of words — and the meaning of words depends on the people hearing the words.

This is not the case with computer languages, which are precise. “Instead of having a vague, societally defined effect on human brains,” explains Wolfram, “they’re defined to have a very specific effect on a computer.”

Wolfram says he has created a bridge between human and machine language — his own Wolfram Language. The bridge isn’t fully built out, but he’s working on that. If he’s successful, it’s possible that you could write a crystal-clear and thus easily enforceable contract. That’s only one implication of Wolfram’s work, but it’s a pretty amazing prospect in and of itself — or, if you’re a lawyer, a nightmarish one.
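To get a feel for the difference Wolfram is describing, here’s a toy sketch in ordinary Python (my own hypothetical example, not Wolfram Language): a late-payment clause written as code, where the terms are data and the rule is a function, so every input yields exactly one answer.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical illustration: a contract clause as data plus a
# deterministic rule, rather than prose open to interpretation.

@dataclass
class PaymentClause:
    amount_due: float        # payment owed, in dollars
    due_date: date           # deadline for payment
    late_fee_per_day: float  # penalty accrued per day past the deadline

    def amount_owed(self, payment_date: date) -> float:
        """Total owed on a given payment date.

        Unlike the English phrase "a reasonable late fee may apply,"
        this rule produces exactly one answer for any input.
        """
        days_late = max(0, (payment_date - self.due_date).days)
        return self.amount_due + days_late * self.late_fee_per_day

clause = PaymentClause(amount_due=1000.0, due_date=date(2016, 11, 1),
                       late_fee_per_day=5.0)
print(clause.amount_owed(date(2016, 11, 11)))  # 10 days late: 1050.0
```

Two parties (or two computers) running this clause can never disagree about what is owed — which is the precision Wolfram wants to extend to contracts generally.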