
TR Memescape


Show Posts


Topics - Testy Calibrate

Politics and Current Events / liberty U: whoah
super long article so I am splitting into 2 posts but interesting
Philosophy / Ethics of technology
I am reading this article:
and came across this tidbit:
5. Most tech education doesn't include ethical training.

In mature disciplines like law or medicine, we often see centuries of learning incorporated into the professional curriculum, with explicit requirements for ethical education. Now, that hardly stops ethical transgressions from happening--we can see deeply unethical people in positions of power today who went to top business schools that proudly tout their vaunted ethics programs. But that basic level of familiarity with ethical concerns gives those fields a broad fluency in the concepts of ethics so they can have informed conversations. And more importantly, it ensures that those who want to do the right thing and do their jobs in an ethical way have a firm foundation to build on.

But until the very recent backlash against some of the worst excesses of the tech world, there had been little progress in increasing the expectation of ethical education being incorporated into technical training. There are still very few programs aimed at upgrading the ethical knowledge of those who are already in the workforce; continuing education is largely focused on acquiring new technical skills rather than social ones. There's no silver-bullet solution to this issue; it's overly simplistic to think that simply bringing computer scientists into closer collaboration with liberal arts majors will significantly address these ethics concerns. But it is clear that technologists will have to rapidly become fluent in ethical concerns if they want to continue to have the widespread public support that they currently enjoy.

which kind of woke me to something I haven't thought deeply about.

Technology is media theory and social theory rolled into a weird Gordian knot. Successful technology alters human systems, and our global communications landscape assures that successful technologies will shape the complex, dynamic, chaotic, and fundamentally nonlinear systems of human behavior and activity. Relying on principles like 'pure free speech' or really any libertarian-oriented principles as axiomatic will bring about the best and the worst outcomes of thought experiments regarding the extensions of those principles, plus countless unanticipated consequences. Has ethics entered a new age of systems? How could we possibly educate about ethics on principles that treat human welfare as the result of a principle rather than as the intended cause of a policy? So, rather than saying freedom will produce prosperity for all, it is now imperative to be intentional about, for example, redistribution, because systemically capital always concentrates and eventually r > g (as Piketty puts it) creates aristocracies and games of thrones. But that's just an economic example. We know that we cannot manage any chaotic system for sustained yield, because externalities will build up and feedback will collapse the system; i.e., series of collapses and bubbles in the biosphere, politics, economies, etc. will become the norm.
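The r > g dynamic mentioned above can be shown with a toy compound-interest sketch. This is a deliberately crude illustration: the rates and the starting capital-to-income ratio below are invented for the example, not Piketty's actual estimates.

```python
# Toy illustration of Piketty's r > g: when the return on capital (r)
# outpaces economy-wide growth (g), wealth held as capital grows relative
# to income, with nothing in the system itself to reverse it.
# All figures below are invented for illustration.

def capital_to_income_ratio(years, r=0.05, g=0.02, capital=4.0, income=1.0):
    """Compound capital at r and income at g, return the final ratio."""
    for _ in range(years):
        capital *= 1 + r  # returns on capital compound at r
        income *= 1 + g   # wages/output grow at the slower rate g
    return capital / income

print(capital_to_income_ratio(0))   # starting ratio
print(capital_to_income_ratio(50))  # ratio after 50 years of r > g
```

With these assumed rates the ratio roughly quadruples over 50 years, which is the point about concentration being a systemic default rather than an aberration.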

So, what does it even mean to educate tech workers in ethics?
For decades, hand dryers have been presented as an environmentally and hygienically friendly way to remove water and bacteria after washing hands with soap. But while it seems like a good idea in theory, hand dryers may actually increase the spread of bacteria on skin and clothing. Previous studies have come to similar conclusions, but the latest research may be enough to give even the most ardent hand dryer supporters reasons to avoid them.

Via Boing Boing, researchers at the University of Connecticut published a study which confirmed hand dryers draw in "potentially infectious microbes" and spread them when activated. Even the low-powered hand dryers used in the study were prone to gathering fecal material and bacteria from the air and blasting them onto unsuspecting users.

How does it happen? Even the cleanest public restrooms are rarely hygienic environments. But the biggest issue comes from flushing toilets without lids. The flushes often send fecal material into the air, which is subsequently sucked in and pushed out by the active hand dryers. It should be noted that hand dryers with HEPA filters can cut down on the intake of harmful particles and other unwanted objects in the air. But they don't fully eliminate them.

Unfortunately, moist hands and skin are an ideal environment for bacteria to thrive, especially if the users don't realize what's happening. Many public restrooms don't even offer paper towels as an alternative, leaving the hand dryers as the only option. It's almost enough to make us start carrying our own towels around.
Dear Sundar,

We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.

Google is implementing Project Maven, a customized AI surveillance engine that uses "wide area motion imagery" data captured by US government drones to detect vehicles and other objects, track their motions and provide results to the Department of Defense.

Recently, Googlers voiced concerns about Maven internally. Diane Greene responded, assuring them that the technology will not "operate or fly drones" and "will not be used to launch weapons". While this eliminates a narrow set of direct applications, the technology is being built for the military, and once it's delivered it could easily be used to assist in these tasks. This plan will irreparably damage Google's brand and its ability to compete for talent.

    We request that you cancel this project immediately

Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public's trust. By entering into this contract, Google will join the ranks of companies like Palantir, Raytheon and General Dynamics. The argument that other firms, like Microsoft and Amazon, are also participating doesn't make this any less risky for Google. Google's unique history, its motto "don't be evil", and its direct reach into the lives of billions of users set it apart.

We cannot outsource the moral responsibility of our technologies to third parties. Google's stated values make this clear: every one of our users is trusting us. Never jeopardize that. Ever. This contract puts Google's reputation at risk and stands in direct opposition to our core values. Building this technology to assist the US government in military surveillance - and potentially lethal outcomes - is not acceptable.

Recognizing Google's moral and ethical responsibility, and the threat to Google's reputation, we request that you: 1. Cancel this project immediately. 2. Draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.

    This open letter was originally published in the New York Times
Politics and Current Events / is twitter toxic?
Twitter is not alone in wrestling with the fact that its product is being corrupted for malevolence: Facebook and Google have come under heightened scrutiny since the presidential election, as more information comes to light revealing how their platforms manipulate citizens, from Cambridge Analytica to conspiracy videos. The companies' responses have been timid, reactive, or worse. "All of them are guilty of waiting too long to address the current problem, and all of them have a long way to go," says Jonathon Morgan, founder of Data for Democracy, a team of technologists and data experts who tackle governmental social-impact projects.

The stakes are particularly high for Twitter, given that enabling breaking news and global discourse is key to both its user appeal and business model. Its challenges, increasingly, are the world's.

How did Twitter get into this mess? Why is it only now addressing the malfeasance that has dogged the platform for years? "Safety got away from Twitter," says a former VP at the company. "It was Pandora's box. Once it's opened, how do you put it all back in again?"

In Twitter's early days, as the microblogging platform's founders were figuring out its purpose, its users showed them Twitter's power for good. As dissidents, activists, and whistle-blowers galvanized by global social movements embraced Twitter, free expression became the startup's guiding principle. "Let the tweets flow," said Alex Macgillivray, Twitter's first general counsel, who later served as deputy CTO in the Obama administration. Internally, Twitter thought of itself as "the free-speech wing of the free-speech party."

This ideology proved naive. "Twitter became so convinced of the virtue of its commitment to free speech that the leadership utterly misunderstood how it was being hijacked and weaponized," says a former executive.

The first sign of trouble was spam. Child pornography, phishing attacks, and bots flooded the tweetstream. Twitter, at the time, seemed to be distracted by other challenges. When the company appointed Dick Costolo as CEO in October 2010, he was trying to fix Twitter's underlying infrastructure--the company had become synonymous with its "fail whale" server-error page, which exemplified its weak engineering foundation. Though Twitter was rocketing toward 100 million users during 2011, its antispam team included just four dedicated engineers. "Spam was incredibly embarrassing, and they built these stupidly bare-minimum tools to [fight it]," says a former senior engineer, who remembers "goddamn bot wars erupting" as fake accounts fought each other for clicks.

"You can't take credit for the Arab Spring without taking responsibility for Donald Trump," says Leslie Miley, a former engineering safety manager at Twitter.
Twitter's trust and safety group, responsible for safeguarding users, was run by Del Harvey, Twitter employee No. 25. She had an atypical résumé for Silicon Valley: Harvey had previously worked with Perverted Justice, a controversial volunteer group that used web chat rooms to ferret out apparent sexual predators, and partnered with NBC's To Catch a Predator, posing as a minor to lure in pedophiles for arrest on TV. Her lack of traditional technical and policy experience made her a polarizing figure within the organization, although allies have found her passion about safety issues inspiring. In the early days, "she personally responded to individual [affected] users--Del worked tirelessly," says Macgillivray. "[She] took on some of the most complex issues that Twitter faced. We didn't get everything right, but Del's leadership was very often a factor when we did."

Harvey's view, championed by Macgillivray and other executives, was that bad speech could ultimately be defeated with more speech, a belief that echoed Supreme Court Justice Louis Brandeis's 1927 landmark First Amendment decision that this remedy is always preferable to "enforced silence." Harvey occasionally used as an example the phrase "Yo bitch," which bad actors intend as invective, but others perceive as a sassy hello. Who was Twitter to decide? The marketplace of ideas would figure it out.

By 2012, spam was mutating into destructive trolling and hate speech. The few engineers in Harvey's group had built some internal tools to enable her team to more quickly remove illegal content such as child pornography, but they weren't prepared for the proliferation of harassment on Twitter. "Every time you build a wall, someone is going to build a higher ladder, and there are always more people outside trying to fuck you over than there are inside trying to stop them," says a former platform engineer. That year, Australian TV personality Charlotte Dawson was subjected to a rash of vicious tweets--e.g., "go hang yourself"--after she spoke out against online abuse. Dawson attempted suicide and was hospitalized. The following summer, in the U.K., after activist Caroline Criado-Perez campaigned to get a woman's image featured on the 10-pound note, her Twitter feed was deluged with trolls sending her 50 rape threats per hour.

The company responded by creating a dedicated button for reporting abuse within tweets, yet trolls only grew stronger on the platform. Internally, Costolo complained that the "abuse economics" were "backward." It took just seconds to create an account to harass someone, but reporting that abuse required filling out a time-consuming form. Harvey's team, earnest about reviewing the context of each reported tweet but lacking a large enough support staff, moved slowly. Multiple sources say it wasn't uncommon for her group to take months to respond to backlogged abuse tickets. Because they lacked the necessary language support, team members had to rely on Google Translate for answering many non-English complaints. User support agents, who manually evaluated flagged tweets, were so overwhelmed by tickets that if banned users appealed a suspension, they would sometimes simply release the offenders back onto the platform. "They were drowning," says a source who worked closely with Harvey. "To this day, it's shocking to me how bad Twitter was at safety."

Twitter's leadership, meanwhile, was focused on preparing for the company's November 2013 IPO, and as a result it devoted the bulk of its engineering resources to the team overseeing user growth, which was key to Twitter's pitch to Wall Street. Harvey didn't have the technical support she needed to build scalable solutions to Twitter's woes.

Toxicity on the platform intensified during this time, especially in international markets. Trolls organized to spread misogynist messages in India and anti-Semitic ones in Europe. In Latin America, bots began infecting elections. Hundreds used during Brazil's 2014 presidential race spread propaganda, leading a company executive to meet with government officials, during which, according to a source, "pretty much every member of the Brazilian house and senate asked, 'What are you doing about bots?'" (Around this time, Russia reportedly began testing bots of its own to sway public opinion through disinformation. Twitter largely tolerated automated accounts on the platform; a knowledgeable source recalls the company once sending a cease-and-desist letter to a bot farmer, which was disregarded, a symbol of its anemic response to this issue.) Twitter's leadership seemed deaf to cries from overseas offices. "It was such a Bay Area company," says a former international employee, echoing a common grievance that Twitter fell victim to Silicon Valley myopia. "Whenever [an incident] happened in the U.S., it was a company-wide tragedy. We would be like, 'But this happens to us every day!'"

It wasn't until mid-2014, around the time that trolls forced comedian Robin Williams's daughter, Zelda, off the service in the wake of her father's suicide--she later returned--that Costolo had finally had enough. Costolo, who had been the victim of abuse in his own feed, lost faith in Harvey, multiple sources say. He put a different department in charge of responding to user-submitted abuse tickets, though he left Harvey in charge of setting the company's trust and safety guidelines.
more at link
     by Antonio Regalado April 2, 2018

Ready for a world in which a $50 DNA test can predict your odds of earning a PhD or forecast which toddler gets into a selective preschool?

Robert Plomin, a behavioral geneticist, says that's exactly what's coming.

For decades genetic researchers have sought the hereditary factors behind intelligence, with little luck. But now gene studies have finally gotten big enough--and hence powerful enough--to zero in on genetic differences linked to IQ.

A year ago, no gene had ever been tied to performance on an IQ test. Since then, more than 500 have, thanks to gene studies involving more than 200,000 test takers. Results from an experiment correlating one million people's DNA with their academic success are due at any time.

The discoveries mean we can now read the DNA of a young child and get a notion of how intelligent he or she will be, says Plomin, an American based at King's College London, where he leads a long-term study of 13,000 pairs of British twins.

Plomin outlined the DNA IQ test scenario in January in a paper titled "The New Genetics of Intelligence," making a case that parents will use direct-to-consumer tests to predict kids' mental abilities and make schooling choices, a concept he calls precision education.

As of now, the predictions are not highly accurate. The DNA variations that have been linked to test scores explain less than 10 percent of the intelligence differences between the people of European ancestry who've been studied.

Even so, MIT Technology Review found that aspects of Plomin's testing scenario are already happening. At least three online services, including GenePlaza and DNA Land, have started offering to quantify anyone's genetic IQ from a spit sample.

Others are holding back. The largest company offering direct-to-consumer DNA health reports, 23andMe, says it's not telling people their brain rating out of concern the information would be poorly received.

Several educators contacted by MIT Technology Review reacted with alarm to the new developments, saying DNA tests should not be used to evaluate children's academic prospects.

"The idea is we'll have this information everywhere you go, like an RFID tag. Everyone will know who you are, what you are about. To me that is really scary," says Catherine Bliss, a sociologist at the University of California, San Francisco, and author of a book questioning the use of genetics in social science.

"A world where people are slotted according to their inborn ability--well, that is Gattaca," says Bliss. "That is eugenics."
more at link

my bolding. That seems about as good as this particular test will ever be able to get. Not least because a rigorous definition of IQ is unlikely to happen in our lifetime and MENSA is notoriously a haven for less than the most capable among us.
President Trump's Education Department and its inspector general, as well as lawmakers and think tanks of all ideological stripes, have raised concerns about the growing cost of the federal government's student loan programs -- specifically its loan forgiveness options for graduate students. Members of both chambers of Congress have said they are committed to passing new higher education legislation this year that will include changes to these programs.1

The costs of the suite of plans currently offered by the government to lessen the burden of grad school debt have ballooned faster than anticipated, and the federal government stands to lose bundles of money. A new audit from the Department of Education's inspector general found that between fiscal years 2011 and 2015, the cost of programs that allow student borrowers to repay their federal loans at a rate proportional to their income shot up from $1.4 billion to $11.5 billion. Back in 2007, when many such programs launched, the Congressional Budget Office projected they would cost just $4 billion over the 10 years ending in 2017.

The cost of the loan forgiveness programs exploded, in part, because policymakers did not correctly estimate the number of students who would take advantage of such programs, according to higher education scholar Jason Delisle. Now there's an emerging consensus that some programs should be reined in, but ideas on how much and in what ways vary by party affiliation. Senate Democrats just introduced a college affordability bill that focuses on creating "debt-free" college plans by giving federal matching funds to states that, in turn, would figure out ways to help students pay for school. In the past, President Barack Obama acknowledged the need to require borrowers to repay more of their debts and made some proposals for modifying the programs' rules. The GOP goes much further in its suggestions: A new proposal from House Republicans would eliminate some loan-forgiveness programs entirely.

The federal government currently offers several types of loans, with varying repayment terms, one of which can cover up to the full cost of a student's graduate program. If, after they leave school, a borrower signs up for an income-driven repayment plan, they will pay back their loan at the rate of 10 percent of their discretionary income2 each year, and the remaining balance will be forgiven after 20 years.

Under the Public Service Loan Forgiveness Program, however, a student's debt can be forgiven after just 10 years. The program was created to ease economic barriers to entering public service, which is defined as work for any federal, state, local or tribal agency, or any tax-exempt nonprofit.3

Right now, a Georgetown Law grad who's gunning for a job at a U.S. attorney's office and enrolled in the Public Service Loan Forgiveness Program would expect that the federal student loans she took out to help pay her $180,000 tuition will be forgiven after 10 years. If, like the typical lawyer, she graduates with $140,000 in federal student loan debt and her salary rises from $59,000 to $121,000 a year over her first 10 years on the job, she could have the government wipe out $147,000 in debt -- the full remaining principal of her debt plus interest -- according to a 2014 study from the think tank New America, which Delisle co-authored.

Or let's say a second-grade teacher with a master's degree and $42,000 in federal student loan debt (a typical amount for a first-year teacher after undergraduate and graduate school) earns in the 75th percentile for his age for 10 years. If he dutifully fulfills all the requirements for a federal debt forgiveness program -- including completing all of the onerous paperwork -- he, for now, stands to have about $33,000 of that debt forgiven, according to the New America report.
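The repayment mechanics the article describes (pay 10 percent of discretionary income each year, with the remainder forgiven after 10 years of public service) can be sketched as a simple balance simulation. This is a rough illustration only: the 6 percent interest rate, the $20,000 poverty guideline, and the straight-line salary path are my assumptions, so the forgiven amount will not match the New America study's $147,000 figure exactly.

```python
# Rough sketch of Public Service Loan Forgiveness under an income-driven
# plan: each year the borrower pays 10% of discretionary income (income
# above 150% of an assumed poverty guideline) while interest accrues;
# whatever balance remains after 10 years is forgiven.
# Interest rate and poverty guideline are illustrative assumptions.

POVERTY_GUIDELINE = 20_000  # assumed; the real figure varies by year/household
ANNUAL_RATE = 0.06          # assumed federal loan interest rate

def pslf_forgiven(principal, salaries, years=10):
    """Return the balance forgiven after `years` of income-driven payments."""
    balance = principal
    for year in range(years):
        salary = salaries[min(year, len(salaries) - 1)]
        discretionary = max(salary - 1.5 * POVERTY_GUIDELINE, 0)
        payment = 0.10 * discretionary
        balance = balance * (1 + ANNUAL_RATE) - payment
        if balance <= 0:
            return 0.0  # loan paid off before forgiveness kicks in
    return balance

# The article's lawyer: $140k in debt, salary rising $59k -> $121k over 10 years
salaries = [59_000 + i * (121_000 - 59_000) / 9 for i in range(10)]
print(f"Forgiven: ${pslf_forgiven(140_000, salaries):,.0f}")
```

Even under these made-up parameters, the pattern the article describes holds: the early payments don't cover the accruing interest, so the forgiven balance ends up larger than the original principal.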

more at the link. But I can't help thinking they are looking through the wrong end of the telescope.
From the way you move and sleep, to how you interact with people around you, depression changes just about everything. It is even noticeable in the way you speak and express yourself in writing. Sometimes this "language of depression" can have a powerful effect on others. Just consider the impact of the poetry and song lyrics of Sylvia Plath and Kurt Cobain, who both killed themselves after suffering from depression.

Scientists have long tried to pin down the exact relationship between depression and language, and technology is helping us get closer to a full picture. Our new study, published in Clinical Psychological Science, has now unveiled a class of words that can help accurately predict whether someone is suffering from depression.

Traditionally, linguistic analyses in this field have been carried out by researchers reading and taking notes. Nowadays, computerised text analysis methods allow the processing of extremely large data banks in minutes. This can help spot linguistic features which humans may miss, calculating the percentage prevalence of words and classes of words, lexical diversity, average sentence length, grammatical patterns and many other metrics.
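A minimal sketch of the metrics that paragraph lists (percentage prevalence of a word class, lexical diversity, average sentence length), assuming a naive regex tokenizer. Real research tools use validated word dictionaries rather than this toy pronoun list.

```python
# Toy computerised text analysis: percentage prevalence of one word class
# (first-person singular pronouns), lexical diversity (type-token ratio),
# and average sentence length. Tokenization is deliberately naive.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}  # illustrative word class

def text_metrics(text):
    # Split into sentences on terminal punctuation, then into word tokens.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "first_person_pct": 100 * sum(w in FIRST_PERSON for w in words) / len(words),
        "lexical_diversity": len(set(words)) / len(words),
        "avg_sentence_length": len(words) / len(sentences),
    }

metrics = text_metrics("I feel alone. I think my life is pointless.")
print(metrics)
```

On large text banks, the same per-document numbers can be computed in minutes and compared between groups, which is the kind of analysis the study describes.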

So far, personal essays and diary entries by depressed people have been useful, as has the work of well-known artists such as Cobain and Plath. For the spoken word, snippets of natural language of people with depression have also provided insight. Taken together, the findings from such research reveal clear and consistent differences in language between those with and without symptoms of depression.
continued at link
Science / Intelligence predicted through DNA?

Since intelligence cannot even be defined in a rigorous manner, and such definitions as there are are highly contentious, I have to assume this is sensationalism. But...
Science / Physiology study design?
This sort of blew my mind in that kind of WTF? way that the serious papers that Alan Sokal was imitating can. And the authors include two Sokals, which makes it even weirder.
Is NIH/NCBI just publishing whatever people send? Is there a reason this review is possible to conduct? April fools? The questions are endless. It is cited in several other papers. You can see them:
Earthing: Health Implications of Reconnecting the Human Body to the Earth's Surface Electrons
Gaétan Chevalier, Stephen T. Sinatra, James L. Oschman, Karol Sokal, and Pawel Sokal
This article has been cited by other articles in PMC.

Environmental medicine generally addresses environmental factors with a negative impact on human health. However, emerging scientific research has revealed a surprisingly positive and overlooked environmental factor on health: direct physical contact with the vast supply of electrons on the surface of the Earth. Modern lifestyle separates humans from such contact. The research suggests that this disconnect may be a major contributor to physiological dysfunction and unwellness. Reconnection with the Earth's electrons has been found to promote intriguing physiological changes and subjective reports of well-being. Earthing (or grounding) refers to the discovery of benefits--including better sleep and reduced pain--from walking barefoot outside or sitting, working, or sleeping indoors connected to conductive systems that transfer the Earth's electrons from the ground into the body. This paper reviews the earthing research and the potential of earthing as a simple and easily accessed global modality of significant clinical importance.
1. Introduction

Environmental medicine focuses on interactions between human health and the environment, including factors such as compromised air and water and toxic chemicals, and how they cause or mediate disease. Omnipresent throughout the environment is a surprisingly beneficial, yet overlooked global resource for health maintenance, disease prevention, and clinical therapy: the surface of the Earth itself. It is an established, though not widely appreciated fact, that the Earth's surface possesses a limitless and continuously renewed supply of free or mobile electrons. The surface of the planet is electrically conductive (except in limited ultradry areas such as deserts), and its negative potential is maintained (i.e., its electron supply replenished) by the global atmospheric electrical circuit [1, 2].

Mounting evidence suggests that the Earth's negative potential can create a stable internal bioelectrical environment for the normal functioning of all body systems. Moreover, oscillations of the intensity of the Earth's potential may be important for setting the biological clocks regulating diurnal body rhythms, such as cortisol secretion [3].

It is also well established that electrons from antioxidant molecules neutralize reactive oxygen species (ROS, or in popular terms, free radicals) involved in the body's immune and inflammatory responses. The National Library of Medicine's online resource PubMed lists 7021 studies and 522 review articles from a search of "antioxidant + electron + free radical" [3]. It is assumed that the influx of free electrons absorbed into the body through direct contact with the Earth likely neutralizes ROS and thereby reduces acute and chronic inflammation [4]. Throughout history, humans mostly walked barefoot or with footwear made of animal skins. They slept on the ground or on skins. Through direct contact or through perspiration-moistened animal skins used as footwear or sleeping mats, the ground's abundant free electrons were able to enter the body, which is electrically conductive [5]. Through this mechanism, every part of the body could equilibrate with the electrical potential of the Earth, thereby stabilizing the electrical environment of all organs, tissues, and cells.

Modern lifestyle has increasingly separated humans from the primordial flow of Earth's electrons. For example, since the 1960s, we have increasingly worn insulating rubber or plastic soled shoes, instead of the traditional leather fashioned from hides. Rossi has lamented that the use of insulating materials in post-World War II shoes has separated us from the Earth's energy field [6]. Obviously, we no longer sleep on the ground as we did in times past.

During recent decades, chronic illness, immune disorders, and inflammatory diseases have increased dramatically, and some researchers have cited environmental factors as the cause [7]. However, the possibility of modern disconnection with the Earth's surface as a cause has not been considered. Much of the research reviewed in this paper points in that direction.

In the late 19th century, a back-to-nature movement in Germany claimed many health benefits from being barefoot outdoors, even in cold weather [8]. In the 1920s, White, a medical doctor, investigated the practice of sleeping grounded after being informed by some individuals that they could not sleep properly "unless they were on the ground or connected to the ground in some way," such as with copper wires attached to grounded-to-Earth water, gas, or radiator pipes. He reported improved sleeping using these techniques [9]. However, these ideas never caught on in mainstream society.

At the end of the last century, experiments initiated independently by Ober in the USA [10] and K. Sokal and P. Sokal [11] in Poland revealed distinct physiological and health benefits with the use of conductive bed pads, mats, EKG- and TENS-type electrode patches, and plates connected indoors to the Earth outside. Ober, a retired cable television executive, found a similarity between the human body (a bioelectrical, signal-transmitting organism) and the cable used to transmit cable television signals. When cables are "grounded" to the Earth, interference is virtually eliminated from the signal. Furthermore, all electrical systems are stabilized by grounding them to the Earth. K. Sokal and P. Sokal, meanwhile, discovered that grounding the human body represents a "universal regulating factor in Nature" that strongly influences bioelectrical, bioenergetic, and biochemical processes and appears to offer a significant modulating effect on chronic illnesses encountered daily in their clinical practices.

Earthing (also known as grounding) refers to contact with the Earth's surface electrons by walking barefoot outside or sitting, working, or sleeping indoors connected to conductive systems, some of them patented, that transfer the energy from the ground into the body. Emerging scientific research supports the concept that the Earth's electrons induce multiple physiological changes of clinical significance, including reduced pain, better sleep, a shift from sympathetic to parasympathetic tone in the autonomic nervous system (ANS), and a blood-thinning effect. The research, along with many anecdotal reports, is presented in a new book entitled Earthing [12].
And, just for fun:
2. Review of Earthing Papers

The studies summarized below involve indoor-testing methods under controlled conditions that simulate being barefoot outdoors.

Has anyone ever heard of this? It seems like an unlikely candidate for the NIH to promote.
Science / humanzees
It is a bit of a stretch, but by no means impossible or even unlikely that a hybrid or a chimera combining a human being and a chimpanzee could be produced in a laboratory. After all, human and chimp (or bonobo) share, by most estimates, roughly 99 percent of their nuclear DNA. Granted this 1 percent difference presumably involves some key alleles, the new gene-editing tool CRISPR offers the prospect (for some, the nightmare) of adding and deleting targeted genes as desired. As a result, it is not unreasonable to foresee the possibility--eventually, perhaps, the likelihood--of producing "humanzees" or "chimphumans." Such an individual would not be an exact equal-parts-of-each combination, but would be neither human nor chimp: rather, something in between.

If that prospect isn't shocking enough, here is an even more controversial suggestion: Doing so would be a terrific idea.

The year 2018 is the bicentennial of Mary Shelley's Frankenstein, subtitled The Modern Prometheus. Haven't we learned that Promethean hubris leads only to disaster, as did the efforts of the fictional Dr. Frankenstein? But there are also other disasters, currently ongoing, such as the grotesque abuse of nonhuman animals, facilitated by what might well be the most hurtful theologically-driven myth of all time: that human beings are discontinuous from the rest of the natural world, since we were specially created and endowed with souls, whereas "they"--all other creatures--were not.
[Image: Book cover of Frankenstein, illustrated by Bernie Wrightson]

Of course, all that we know of evolution (and by now, it's a lot) demands otherwise, since evolution's most fundamental take-home message is continuity. And it is in fact because of continuity--especially those shared genes--that humanzees or chimphumans could likely be produced. Moreover, I propose that the fundamental take-home message of such creation would be to drive a stake into the heart of that destructive disinformation campaign of discontinuity, of human hegemony over all other living things. There is an immense pile of evidence already demonstrating continuity, including but not limited to physiology, genetics, anatomy, embryology, and paleontology, but it is almost impossible to imagine how the most die-hard advocate of humans having a discontinuously unique biological status could continue to maintain this position if confronted with a real, functioning, human-chimp combination.1

It is also possible, however, that my suggestion is doubly fanciful, not only with respect to its biological feasibility, but also whether such a "creation" would have the impact that I propose--and hope. Thus, chimpanzees are widely known to be very similar to human beings: They make and use tools, engage in complex social behavior (including elaborate communication and long-lasting mother-offspring bonds), they laugh, grieve, and affirmatively reconcile after conflicts. They even look like us. Although such recognition has contributed to outrage about abusing chimps--as well as other primates in particular--in circus acts, laboratory experiments, and so forth, it has not generated notable resistance to hunting, imprisoning and eating other animal species, which, along with chimps themselves, are still considered by most people to be "other" and not aspects of "ourselves." (Chimps, moreover, are enthusiastically consumed in parts of equatorial Africa, where they are a prized component of "bush meat.")

   It is at least arguable that the ultimate benefit of teaching human beings their true nature would be worth the sacrifice paid by a few unfortunates.

story continues.

Shades of Omelas
General Discussion / Love your washing machine?

kind of a crazy article. Stuff we don't think about.
An elite Russian hacking team, a historic ransomware attack, an espionage group in the Middle East, and countless small time cryptojackers all have one thing in common. Though their methods and objectives vary, they all lean on leaked NSA hacking tool EternalBlue to infiltrate target computers and spread malware across networks.

Leaked to the public not quite a year ago, EternalBlue has joined a long line of reliable hacker favorites. The Conficker Windows worm infected millions of computers in 2008, and the Welchia remote code execution worm wreaked havoc in 2003. EternalBlue is certainly continuing that tradition--and by all indications it's not going anywhere. If anything, security analysts only see use of the exploit diversifying as attackers develop new, clever applications, or simply discover how easy it is to deploy.

"When you take something that's weaponized and a fully developed concept and make it publicly available you're going to have that level of uptake," says Adam Meyers, vice president of intelligence at the security firm CrowdStrike. "A year later there are still organizations that are getting hit by EternalBlue--still organizations that haven't patched it."
The One That Got Away

EternalBlue is the name of both a software vulnerability in Microsoft's Windows operating system and an exploit the National Security Agency developed to weaponize the bug. In April 2017, the exploit leaked to the public, part of the fifth release of alleged NSA tools by the still mysterious group known as the Shadow Brokers. Unsurprisingly, the agency has never confirmed that it created EternalBlue, or anything else in the Shadow Brokers releases, but numerous reports corroborate its origin--and even Microsoft has publicly attributed its existence to the NSA.

The tool exploits a vulnerability in the Windows Server Message Block, a transport protocol that allows Windows machines to communicate with each other and other devices for things like remote services and file and printer sharing. Attackers manipulate flaws in how SMB handles certain packets to remotely execute any code they want. Once they have that foothold into that initial target device, they can then fan out across a network.

    'It's incredible that a tool which was used by intelligence services is now publicly available and so widely used amongst malicious actors.'

Vikram Thakur, Symantec

Microsoft released its EternalBlue patches on March 14 of last year. But security update adoption is spotty, especially on corporate and institutional networks. Within two months, EternalBlue was the centerpiece of the worldwide WannaCry ransomware attacks that were ultimately traced to North Korean government hackers. As WannaCry hit, Microsoft even took the "highly unusual step" of issuing patches for the still popular, but long-unsupported Windows XP and Windows Server 2003 operating systems.

In the aftermath of WannaCry, Microsoft and others criticized the NSA for keeping the EternalBlue vulnerability a secret for years instead of proactively disclosing it for patching. Some reports estimate that the NSA used and continued to refine the EternalBlue exploit for at least five years, and only warned Microsoft when the agency discovered that the exploit had been stolen. EternalBlue can also be used in concert with other NSA exploits released by the Shadow Brokers, like the kernel backdoor known as DoublePulsar, which burrows deep into the trusted core of a computer where it can often lurk undetected.
Eternal Blues

The versatility of the tool has made it an appealing workhorse for hackers. And though WannaCry raised EternalBlue's profile, many attackers had already realized the exploit's potential by then.

Within days of the Shadow Brokers release, security analysts say that they began to see bad actors using EternalBlue to extract passwords from browsers, and to install malicious cryptocurrency miners on target devices.

more at link
The humanities are not just dying -- they are almost dead. In Scotland, the ancient Chairs in Humanity (which is to say, Latin) have almost disappeared in the past few decades: abolished, left vacant, or merged into chairs of classics. The University of Oxford has revised its famed Literae Humaniores course, "Greats," into something resembling a technical classics degree. Both of those were throwbacks to an era in which Latin played the central, organizing role in the humanities. The loss of these vestigial elements reveals a long and slow realignment, in which the humanities have become a loosely defined collection of technical disciplines.

The result of this is deep conceptual confusion about what the humanities are and the reason for studying them in the first place. I do not intend to address the former question here -- most of us know the humanities when we see them.

Instead I wish to address the other question: the reason for studying them in the first place. This is of paramount importance. After all, university officials, deans, provosts, and presidents all are far more likely to know how to construct a Harvard Business School case study than to parse a Greek verb, more familiar with flowcharts than syllogisms, more conversant in management-speak than the riches of the English language. Hence the oft-repeated call to "make the case for the humanities."

Such an endeavor is fraught with ambiguities. Vulgar conservative critiques of the humanities are usually given the greatest exposure, and yet it is often political (and religious) conservatives who have labored the most mightily to foster traditional humanistic disciplines. Left defenders of the humanities have defended their value in the face of an increasingly corporate and crudely economic world, and yet they have also worked to gut some of the core areas of humanistic inquiry -- "Western civ and all that" -- as indelibly tainted by patriarchy, racism, and colonialism.

Academic overproduction has always been a feature of the university and always will be.
The humanities have both left and right defenders and left and right critics. The left defenders of the humanities are notoriously bad at coming up with a coherent, effective defense, but they have been far more consistent in defending the "useless" disciplines against politically and economically charged attacks. The right defenders of the humanities have sometimes put forward a strong and cogent defense of their value, but they have had little sway when it comes to confronting actual attacks on the humanities by conservative politicians. The sad truth is that instead of forging a transideological apology for humanistic pursuits, this ambiguity has led to the disciplines' being squeezed on both sides.

Indeed, both sides enable the humanities' adversaries. Conservatives who seek to use the coercive and financial power of the state to correct what they see as ideological abuses within the professoriate are complicit in the destruction of the old-fashioned and timeless scholarship they supposedly are defending. It is self-defeating to make common cause with corporate interests just to punish the political sins of liberal professors. Progressives who want to turn the humanities into a laboratory for social change, a catalyst for cultural revolution, a training camp for activists, are guilty of the same instrumentalization. When they impose de facto ideological litmus tests for scholars working in every field, they betray their conviction that the humanities exist only to serve contemporary political and social ends.

Caught in the middle are the humanities scholars who simply want to do good work in their fields; to read things and think about what they mean; to tease out conclusions about the past and present through a careful analysis of evidence; to delve deeply into language, art, artifact, culture, and nature. This is what the university was established to do.

To see this, one must first understand that the popular critiques of the humanities -- overspecialization, overproduction, too little teaching -- are fundamentally misguided. Often well-meaning critics think they are attacking the decadence and excess of contemporary humanities scholarship. In fact, they are striking at the very heart of the humanities as they have existed for centuries.
read the cases at the link.
Politics and Current Events / Pensions
Spoiler (click to show/hide)
The sudden arrival of a new class of tech skeptic, the industry apostate, has only complicated the discussion. Late last year, the co-inventor of the Facebook "like," Justin Rosenstein, called it a "bright ding of pseudopleasure"; in January, the investment firm Jana Partners, a shareholder in Apple, wrote a letter to the company warning that its products "may be having unintentional negative consequences."

All but conjuring Oppenheimer at White Sands, these critics offer broadsides, warning about addictive design tricks and profit-driven systems eroding our humanity. But it's hard to discern a collective message in their garment-rending: Is it design that needs fixing? Tech? Capitalism? This lack of clarity may stem from the fact that these people are not ideologues but reformists. They tend to believe that companies should be more responsible -- and users must be, too. But with rare exceptions, the reformists stop short of asking the uncomfortable questions: Is it possible to reform profit-driven systems that turn attention into money? In such a business, can you even separate addiction from success?

Perhaps this is unfair -- the reformists are trying. But while we wait for a consensus, or a plan, allow me to suggest a starting point: the dots. Little. Often red. Sometimes with numbers. Commonly seen at the corners of app icons, where they are known in the trade as badges, they are now proliferating across once-peaceful interfaces on a steep epidemic curve. They alert us to things that need to be checked: unread messages; new activities; pending software updates; announcements; unresolved problems. As they've spread, they've become a rare commonality in the products that we -- and the remorseful technologists -- are so worried about. If not the culprits, the dots are at least accessories to most of the supposed crimes of addictive app design.

When platforms or services sense their users are disengaged, whether from social activities, work or merely a continued contribution to corporate profitability, dots are deployed: outside, inside, wherever they might be seen. I've met dots that existed only to inform me of the existence of other dots, new dots, dots with almost no meaning at all; a dot on my Instagram app led me to another dot within it, which informed me that something had happened on Facebook: Someone I barely know had posted for the first time in a while. These dots are omnipresent, leading everywhere and ending nowhere. So maybe there's something to be gained by connecting them.

The prototypical modern dot -- stop-sign red, with numbers, round, maddening -- was popularized with Mac OS X, the first version of which was released nearly 20 years ago. It was used most visibly as part of Apple's Mail app, perched atop an icon of a blue postage stamp, in a new and now ever-present dock full of apps. It contained a number representing your unread messages. But it wasn't until the launch of the iPhone, in 2007, that dots transformed from a simple utility into a way of life -- from a solution into a cause unto themselves.

That year, we got the very first glimpse of the iPhone's home screen, in Steve Jobs's hand, onstage at MacWorld. It showed three dots, ringed in white: 1 unread text; 5 calls or voice mail messages; 1 email. Jobs set about showing off the apps, opening them, eliminating the dots. Eventually, when the iPhone was opened to outside developers, badge use accelerated. As touch-screen phones careered toward ubiquity, and as desktop interfaces and website design and mobile operating systems huddled together around a crude and adapting set of visual metaphors, the badge was ascendant.

On Windows desktop computers, they tended to be blue and lived in the lower right corner. On BlackBerrys, red, with a white asterisk in the middle. On social media, in apps and on websites, badge design was more creative, appearing as little speech bubbles or as rectangles. They make appearances on Facebook and across Google products, perhaps most notoriously on the ill-fated Google Plus social network, where blocky badges were filled with inexplicably, desperately high numbers. (This tactic has since spread, obnoxiously, to news sites and, inexplicably, to comment sections.) Android itself has remained officially unbadged, but the next version of the operating system, called Oreo, will include them by default, completing their invasion.

What's so powerful about the dots is that until we investigate them, they could signify anything: a career-altering email; a reminder that Winter Sales End Soon; a match, a date, a "we need to talk." The same badge might lead to word that Grandma's in the hospital or that, according to a prerecorded voice, the home-security system you don't own is in urgent need of attention or that, for the 51st time today, someone has posted in the group chat.

New and flourishing modes of socialization amount, in the most abstract terms, to the creation and reduction of dots, and the experience of their attendant joys and anxieties. Dots are deceptively, insidiously simple: They are either there or they're not; they contain a number, and that number has a value. But they imbue whatever they touch with a spirit of urgency, reminding us that behind each otherwise static icon is unfinished business. They don't so much inform us or guide us as correct us: You're looking there, but you should be looking here. They're a lawn that must be mowed. Boils that must be lanced, or at least scabs that itch to be picked. They're Bubble Wrap laid over your entire digital existence.

To their credit, the big tech companies seem to be aware of the problem, at least in the narrow terms of user experience. In Google's guide for application developers, the company makes a gentle attempt to pre-empt future senseless dot deployment. "Don't badge every notification, as there are cases where badges don't make sense," the company suggests. Apple, in its guidelines, seems a bit more fed up. "Minimize badging," it says. "Don't overwhelm users by connecting badging with a huge amount of information that changes frequently. Use it to present brief, essential information and atypical content changes that are highly likely to be of interest."

These companies know better than anyone that dots are a problem, but they also know that dots work. Late last year, a red badge burbled to the surface next to millions of iPhone users' Settings apps. It looked as though it might be an update, but it turned out to be a demand: Finish adding your credit card to Apple Pay, or the dot stays put. Apple might as well have said: Give us your credit card number, or we will annoy you until you do.

The lack of consensus within the mounting resistance to Big Tech can also be found within the perimeter of the dot. After all, it's where the most dangerous conflations take place: of what we need, and what we're told we need; of what purpose our software serves to us, and us to it; of dismissal with fulfillment. The dot is where ill-gotten attention is laundered into legitimate-seeming engagement. On this, our most influential tech companies seem to agree. Maybe our self-appointed saviors can, too.
The 100-page report, titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," boasts 26 experts from 14 different institutions and organizations, including Oxford University's Future of Humanity Institute, Cambridge University's Centre for the Study of Existential Risk, Elon Musk's OpenAI, and the Electronic Frontier Foundation. The report builds upon a two-day workshop held at Oxford University back in February of last year. In the report, the authors detail some of the ways AI could make things generally unpleasant in the next few years, focusing on three security domains of note--the digital, physical, and political arenas--and how the malicious use of AI could upset each of these.

"It is often the case that AI systems don't merely reach human levels of performance but significantly surpass it," said Miles Brundage, a Research Fellow at Oxford University's Future of Humanity Institute and a co-author of the report, in a statement. "It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour."

Indeed, the big takeaway of the report is that AI is now on the cusp of being a tremendously negative disruptive force as rival states, criminals, and terrorists use the scale and efficiency of AI to launch finely-targeted and highly efficient attacks.

"As AI capabilities become more powerful and widespread, we expect the growing use of AI systems to lead to the expansion of existing threats, the introduction of new threats, and a change to the typical character of threats," write the authors in the new report.

They warn that the cost of attacks will be lowered owing to the scalable use of AI and the offloading of tasks typically performed by humans. Similarly, new threats may emerge through the use of systems that will complete tasks normally too impractical or onerous for humans.

"We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems," they write.

In terms of specifics, the authors warn of cyber attacks involving automated hacking, spear phishing, speech synthesis to impersonate targets, and "data poisoning." The advent of drones and semi- and fully-autonomous systems introduces an entirely new class of risks; the nightmarish scenarios include the deliberate crashing of multiple self-driving vehicles, coordinated attacks using thousands of micro-drones, converting commercial drones into face-recognizing assassins, and holding critical infrastructures for ransom. Politically, AI could be used to sway popular opinion, create highly targeted propaganda, and spread fake--but perhaps highly believable--posts and videos. AI will enable better surveillance technologies, both in public and private spaces.

"We also expect novel attacks that take advantage of an improved capacity to analyze human behaviors, moods, and beliefs on the basis of available data," add the authors. "These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates."

Sadly, the era of "fake news" is already upon us. It's becoming increasingly difficult to tell fact from fiction. Russia's apparent misuse of social media during the last US presidential election showed the potential for state actors to use social networks in nefarious ways. In some respects, the new report has a "tell us something we didn't already know" aspect to it.

Seán Ó hÉigeartaigh, Executive Director of Cambridge University's Centre for the Study of Existential Risk and a co-author of the new study, says hype used to outstrip fact when it came to our appreciation of AI and machine learning--but those days are now long gone. "This report looks at the practices that just don't work anymore--and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable--and what type of laws and international regulations might work in tandem with this," he explained.

To mitigate many of these emerging threats, Ó hÉigeartaigh and his colleagues presented five high-level recommendations:

        AI and ML researchers and engineers should acknowledge the dual use nature of their research
        Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI
        Best practices should be identified from other high-stakes technical domains, including computer security and other dual use technologies, and imported where applicable to the case of AI
        The development of normative and ethical frameworks should be prioritized in each of these domains
        The range of stakeholders and experts involved in discussions of these challenges should be expanded

In addition to these strategies, the authors say a "rethinking" of cyber security is needed, along with investments in institutional and technological solutions. Less plausibly, they say developers should adopt a "culture of responsibility" and consider the powers of data sharing and openness (good luck with that).

Ilia Kolochenko, CEO of web security company High-Tech Bridge, believes the authors of the new report are overstating the risks, and that it'll be business as usual over the next decade. "First of all, we should clearly distinguish between Strong AI--artificial intelligence, which is capable of replacing the human brain--and the generally misused 'AI' term that has become amorphous and ambiguous," explained Kolochenko in a statement emailed to Gizmodo.

He says criminals have already been using simple machine-learning algorithms to increase the efficiency of their attacks, but these efforts have been successful because of basic cyber security deficiencies and omissions in organizations. To Kolochenko, machine learning is merely an "auxiliary accelerator."

"One should also bear in mind that [artificial intelligence and machine learning are] being used by the good guys to fight cybercrime more efficiently too," he added. "Moreover, development of AI technologies usually requires expensive long term investments that Black Hats [malicious hackers] typically cannot afford. Therefore, I don't see substantial risks or revolutions that may happen in the digital space because of AI in the next five years at least."

Kolochenko is not wrong when he says that AI will be used to mitigate many of the threats made possible by AI, but to say that no "substantial" risks will emerge in the coming years seems a bit pie-in-the-sky. Sadly, the warnings presented in the new report will likely fall on deaf ears until people start to feel the ill effects of AI at a personal level. For now, it's all a bit too abstract for citizens to care, and politicians aren't yet prepared to deal with something so intangible and seemingly futuristic. In the meantime, we should remain wary of the risks, work to apply the recommendations proposed by these authors, and pound the message over and over to the best of our abilities.

link to the actual report
Statistics came well before computers. It would be very different if it were the other way around.

The stats most people learn in high school or college come from the time when computations were done with pen and paper. "Statistics were constrained by the computational technology available at the time," says Stanford statistics professor Robert Tibshirani. "People use certain methods because that is how it all started and that's what they are used to. It's hard to change it."

People who have taken intro statistics courses might recognize terms like "normal distribution," "t-distribution," and "least squares regression." We learn about them, in large part, because these were convenient things to calculate with the tools available in the early 20th century. We shouldn't be learning this stuff anymore--or, at least, it shouldn't be the first thing we learn. There are better options.

As a former data scientist, there is no question I get asked more than, "What is the best way to learn statistics?" I always give the same answer: Read An Introduction to Statistical Learning. Then, if you finish that and want more, read The Elements of Statistical Learning. These two books, written by statistics professors at Stanford University, the University of Washington, and the University of Southern California, are the most intuitive and relevant books I've found on how to do statistics with modern technology. Tibshirani is a coauthor of both. You can download them for free.
Number crunchers

The books are based on the concept of "statistical learning," a mashup of stats and machine learning. The field of machine learning is all about feeding huge amounts of data into algorithms to make accurate predictions. Statistics is concerned with predictions as well, says Tibshirani, but also with determining how confident we can be about the importance of certain inputs.

This is important in areas like medicine, where a researcher doesn't just want to know whether a medicine worked, but also why it worked. Statistical learning is meant to take the best ideas from machine learning and computer science, and explain how they can be used and interpreted through a statistician's lens.

The beauty of these books is that they make seemingly impenetrable concepts--"cross-validation," "logistic regression," "support vector machines"--easily understandable. This is because the authors focus on intuition rather than mathematics. Unlike many statisticians, Tibshirani and his coauthors don't come from a math background. He believes this helps them think conceptually. "We try to explain [concepts] intuitively by explaining the underlying idea first," he says. "Then we give examples of a situation where you would expect it to work. And also, a situation where it might not work. I think people really appreciate that." I certainly did.
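Cross-validation is a good example of the kind of computer-age idea the books build intuition for: instead of trusting how well a model fits the data it was trained on, you repeatedly hold part of the data out, fit on the rest, and score the model on the held-out part. Here's a minimal sketch in Python (standard library only; the simulated data, the seed, and the 5-fold setup are illustrative assumptions, not taken from the books):

```python
import random

random.seed(1)

# Hypothetical data: y is linear in x, plus noise.
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2.0 * x + 1.0 + random.gauss(0, 1) for x in xs]

def fit_line(pts):
    # Ordinary least squares for y = a*x + b.
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    a = sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x, _ in pts)
    return a, my - a * mx

def cv_error(pts, k=5):
    # k-fold cross-validation: hold out each fold in turn, fit on the
    # remaining folds, and average the squared error on the held-out points.
    pts = pts[:]
    random.shuffle(pts)
    fold_size = len(pts) // k
    errors = []
    for i in range(k):
        held_out = pts[i * fold_size:(i + 1) * fold_size]
        train = pts[:i * fold_size] + pts[(i + 1) * fold_size:]
        a, b = fit_line(train)
        errors.append(sum((a * x + b - y) ** 2 for x, y in held_out) / len(held_out))
    return sum(errors) / k

pts = list(zip(xs, ys))
print(f"5-fold CV mean squared error: {cv_error(pts):.2f}")
```

Because the held-out points never influence the fit, the cross-validated error is an honest estimate of how the model would do on new data, which is exactly the question a formula-based approach has to answer with assumptions.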

For example, a section of An Introduction to Statistical Learning is dedicated to explaining the use of "bootstrapping"--a statistical technique only available in the age of computers. Bootstrapping is a way to assess the accuracy of an estimate by generating multiple datasets from the same data.

For example, let's say you collected the weights of 1,000 randomly selected adult women in the US, and found that the average was 130 pounds. How confident can you be in this number? In conventional statistics, to answer this question you would use a formula developed more than a century ago, which relies on many assumptions. Today, rather than make those assumptions, you can use a computer to draw thousands of new samples of 1,000 from your original 1,000, sampling with replacement (this is the bootstrapping), and see how tightly the averages of those resamples cluster around 130. The tighter the cluster, the more confident you can be in the estimate.
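The whole procedure fits in a few lines. Here's a sketch in Python (standard library only; the simulated weights and the seed are illustrative assumptions, and the 130/30 parameters are made up to match the article's example):

```python
import random

random.seed(0)

# Hypothetical data: weights (in pounds) of 1,000 randomly sampled adult women.
n = 1000
weights = [random.gauss(130, 30) for _ in range(n)]
sample_mean = sum(weights) / n

# Bootstrap: draw many resamples of the SAME size as the original data,
# sampling with replacement, and recompute the mean of each resample.
boot_means = sorted(
    sum(random.choices(weights, k=n)) / n
    for _ in range(2000)
)

# A 95% percentile interval from the bootstrap distribution of the mean.
lo_ci = boot_means[int(0.025 * len(boot_means))]
hi_ci = boot_means[int(0.975 * len(boot_means))]
print(f"sample mean = {sample_mean:.1f}, 95% CI = ({lo_ci:.1f}, {hi_ci:.1f})")
```

The spread of the 2,000 bootstrap means stands in for the spread you would have seen if you could rerun the whole survey 2,000 times, with no century-old formula or normality assumption required.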
Theory and application

These books, mercifully, don't require high-level math, like multivariate calculus or linear algebra. (If you're into that sort of thing, there is a wealth of worthy but dry academic literature out there for you.) "While knowledge of those topics is very valuable, we believe that they are not required in order to develop a solid conceptual understanding of how statistical learning methods work, and how they should be applied," says Daniela Witten, a coauthor of An Introduction to Statistical Learning.

Helpfully, the books also provide code you can use to apply the tools with the statistical programming language R. I recommend putting their examples to work on a dataset you are excited about. If you are into novels, use it to analyze Goodreads ratings. If you like basketball, apply their examples to numbers at Basketball Reference. The statistical learning tools are wonderful in themselves, but I've found they work best for people who are motivated by a personal or professional project.

Data and statistics are an increasingly important part of modern life, and nearly everyone would be better off with a deeper understanding of the tools that help explain our world. Even if you don't want to become a data analyst--which happens to be one of the fastest-growing jobs out there, just so you know--these books are invaluable guides to help explain what's going on.
lots of the text is links at the article.
natesilver: Again, a lot of this is just that David Brooks had a party (the GWB-era GOP) that he once mostly agreed with and now he doesn't have one. Which is annoying for David Brooks but doesn't really provide much evidence either way in terms of broader public sentiment.
Report: The Failure of Policy Planning in California's Charter School Facility Funding

By In the Public Interest

California has more charter schools than any other state in the nation, in large part because of generous public funding and subsidies to lease, build, or buy school buildings. But much of this public investment, hundreds of millions of dollars, has been misspent on schools that do not fulfill the intent of state charter school policy and undermine the financial viability of California's public school districts.

In the report, Spending Blind: The Failure of Policy Planning in California's Charter School Facility Funding, In the Public Interest reveals that a substantial portion of the more than $2.5 billion in tax dollars or taxpayer subsidized financing spent on California charter school facilities in the past 15 years has been misspent on: schools that underperformed nearby traditional public schools; schools built in districts that already had enough classroom space; schools that were found to have discriminatory enrollment policies; and in the worst cases, schools that engaged in unethical or corrupt practices.


The report's key findings include:

    Over the past 15 years, California charter schools have received more than $2.5 billion in tax dollars or taxpayer subsidized funds to lease, build, or buy school buildings.
    Nearly 450 charter schools have opened in places that already had enough classroom space for all students--and this overproduction of schools was made possible by generous public support, including $111 million in rent, lease, or mortgage payments picked up by taxpayers, $135 million in general obligation bonds, and $425 million in private investments subsidized with tax credits or tax exemptions.
    For three-quarters of California charter schools, the quality of education on offer--based on state and charter industry standards--is worse than that of a nearby traditional public school that serves a demographically similar population. Taxpayers have provided these schools with an estimated three-quarters of a billion dollars in direct funding and an additional $1.1 billion in taxpayer-subsidized financing.
    Even by the charter industry's standards, the worst charter schools receive generous facility funding. The California Charter Schools Association identified 161 charter schools that ranked in the bottom 10% of schools serving comparable populations last year, but even these schools received more than $200 million in tax dollars and tax-subsidized funding.
    At least 30% of charter schools were both opened in places that had no need for additional seats and also failed to provide an education superior to that available in nearby public schools. This number is almost certainly an underestimate, but even at this rate, Californians provided these schools combined facilities funding of more than $750 million, at a net cost to taxpayers of nearly $400 million.
    Public facilities funding has been disproportionately concentrated among the less than one-third of schools that are owned by Charter Management Organizations (CMOs) that operate chains of between three and 30 schools. An even more disproportionate share of funding has been taken by just four large CMO chains--Aspire, KIPP, Alliance, and Animo/Green Dot.
    Since 2009, the 253 schools found by the American Civil Liberties Union of Southern California to maintain discriminatory enrollment policies have been awarded a collective $75 million under the SB740 program, $120 million in general obligation bonds, and $150 million in conduit bond financing.
    CMOs have used public tax dollars to buy private property. The Alliance College-Ready Public Schools network of charter schools, for instance, has benefited from more than $110 million in federal and state taxpayer support for its facilities, which are not owned by the public, but are part of a growing empire of privately owned Los Angeles-area real estate now worth in excess of $200 million.
You can download the whole report at the link.

Objective To assess the prospective associations between consumption of ultra-processed food and risk of cancer.

Design Population based cohort study.

Setting and participants 104 980 participants aged at least 18 years (median age 42.8 years) from the French NutriNet-Santé cohort (2009-17). Dietary intakes were collected using repeated 24 hour dietary records, designed to register participants' usual consumption for 3300 different food items. These were categorised according to their degree of processing by the NOVA classification.

Main outcome measures Associations between ultra-processed food intake and risk of overall, breast, prostate, and colorectal cancer assessed by multivariable Cox proportional hazard models adjusted for known risk factors.

Results Ultra-processed food intake was associated with higher overall cancer risk (n=2228 cases; hazard ratio for a 10% increment in the proportion of ultra-processed food in the diet 1.12 (95% confidence interval 1.06 to 1.18); P for trend<0.001) and breast cancer risk (n=739 cases; hazard ratio 1.11 (1.02 to 1.22); P for trend=0.02). These results remained statistically significant after adjustment for several markers of the nutritional quality of the diet (lipid, sodium, and carbohydrate intakes and/or a Western pattern derived by principal component analysis).

Conclusions In this large prospective study, a 10% increase in the proportion of ultra-processed foods in the diet was associated with a significant increase of greater than 10% in risks of overall and breast cancer. Further studies are needed to better understand the relative effect of the various dimensions of processing (nutritional composition, food additives, contact materials, and neoformed contaminants) in these associations.
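To make the headline number concrete: under a Cox proportional hazards model, a hazard ratio of 1.12 per 10% increment compounds multiplicatively across increments. The following back-of-envelope sketch (my illustration, not a figure from the paper) shows what that implies for larger shifts in dietary share:

```python
# Back-of-envelope illustration (not from the paper): a hazard ratio of
# 1.12 per 10% increment of ultra-processed food share compounds
# multiplicatively under the proportional hazards assumption.
hr_per_10pct = 1.12

for increments in (1, 2, 3):
    hr = hr_per_10pct ** increments
    print(f"{10 * increments}% higher share -> HR ~ {hr:.2f}")
```

So, roughly, a diet with a 30% higher ultra-processed share would correspond to about a 40% higher modeled hazard — always within the study's confidence bounds and caveats, of course.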