
TR Memescape

  • Thanks, Obama!

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - Testy Calibrate

Science / Intelligence predicted through DNA?

Since intelligence cannot even be defined in a rigorous manner, and such definitions as exist are highly contentious, I have to assume this is sensationalism. But...
Science / Physiology study design?
This sort of blew my mind in that kind of WTF? way that the serious papers Alan Sokal was imitating can. And the authors include two Sokals, which makes it even weirder.
Is NIH/NCBI just publishing whatever people send? Is there a reason it was possible to conduct this review? April Fools? The questions are endless. It is cited in several other papers. You can see them:
Earthing: Health Implications of Reconnecting the Human Body to the Earth's Surface Electrons
Gaétan Chevalier, Stephen T. Sinatra, James L. Oschman, Karol Sokal, and Pawel Sokal

Environmental medicine generally addresses environmental factors with a negative impact on human health. However, emerging scientific research has revealed a surprisingly positive and overlooked environmental factor on health: direct physical contact with the vast supply of electrons on the surface of the Earth. Modern lifestyle separates humans from such contact. The research suggests that this disconnect may be a major contributor to physiological dysfunction and unwellness. Reconnection with the Earth's electrons has been found to promote intriguing physiological changes and subjective reports of well-being. Earthing (or grounding) refers to the discovery of benefits--including better sleep and reduced pain--from walking barefoot outside or sitting, working, or sleeping indoors connected to conductive systems that transfer the Earth's electrons from the ground into the body. This paper reviews the earthing research and the potential of earthing as a simple and easily accessed global modality of significant clinical importance.
1. Introduction

Environmental medicine focuses on interactions between human health and the environment, including factors such as compromised air and water and toxic chemicals, and how they cause or mediate disease. Omnipresent throughout the environment is a surprisingly beneficial, yet overlooked global resource for health maintenance, disease prevention, and clinical therapy: the surface of the Earth itself. It is an established, though not widely appreciated fact, that the Earth's surface possesses a limitless and continuously renewed supply of free or mobile electrons. The surface of the planet is electrically conductive (except in limited ultradry areas such as deserts), and its negative potential is maintained (i.e., its electron supply replenished) by the global atmospheric electrical circuit [1, 2].

Mounting evidence suggests that the Earth's negative potential can create a stable internal bioelectrical environment for the normal functioning of all body systems. Moreover, oscillations of the intensity of the Earth's potential may be important for setting the biological clocks regulating diurnal body rhythms, such as cortisol secretion [3].

It is also well established that electrons from antioxidant molecules neutralize reactive oxygen species (ROS, or in popular terms, free radicals) involved in the body's immune and inflammatory responses. The National Library of Medicine's online resource PubMed lists 7021 studies and 522 review articles from a search of "antioxidant + electron + free radical" [3]. It is assumed that the influx of free electrons absorbed into the body through direct contact with the Earth likely neutralizes ROS and thereby reduces acute and chronic inflammation [4]. Throughout history, humans mostly walked barefoot or with footwear made of animal skins. They slept on the ground or on skins. Through direct contact or through perspiration-moistened animal skins used as footwear or sleeping mats, the ground's abundant free electrons were able to enter the body, which is electrically conductive [5]. Through this mechanism, every part of the body could equilibrate with the electrical potential of the Earth, thereby stabilizing the electrical environment of all organs, tissues, and cells.

Modern lifestyle has increasingly separated humans from the primordial flow of Earth's electrons. For example, since the 1960s, we have increasingly worn insulating rubber- or plastic-soled shoes instead of the traditional leather fashioned from hides. Rossi has lamented that the use of insulating materials in post-World War II shoes has separated us from the Earth's energy field [6]. Obviously, we no longer sleep on the ground as we did in times past.

During recent decades, chronic illness, immune disorders, and inflammatory diseases have increased dramatically, and some researchers have cited environmental factors as the cause [7]. However, the possibility of modern disconnection with the Earth's surface as a cause has not been considered. Much of the research reviewed in this paper points in that direction.

In the late 19th century, a back-to-nature movement in Germany claimed many health benefits from being barefoot outdoors, even in cold weather [8]. In the 1920s, White, a medical doctor, investigated the practice of sleeping grounded after being informed by some individuals that they could not sleep properly "unless they were on the ground or connected to the ground in some way," such as with copper wires attached to grounded-to-Earth water, gas, or radiator pipes. He reported improved sleeping using these techniques [9]. However, these ideas never caught on in mainstream society.

At the end of the last century, experiments initiated independently by Ober in the USA [10] and K. Sokal and P. Sokal [11] in Poland revealed distinct physiological and health benefits with the use of conductive bed pads, mats, EKG- and TENS-type electrode patches, and plates connected indoors to the Earth outside. Ober, a retired cable television executive, found a similarity between the human body (a bioelectrical, signal-transmitting organism) and the cable used to transmit cable television signals. When cables are "grounded" to the Earth, interference is virtually eliminated from the signal. Furthermore, all electrical systems are stabilized by grounding them to the Earth. K. Sokal and P. Sokal, meanwhile, discovered that grounding the human body represents a "universal regulating factor in Nature" that strongly influences bioelectrical, bioenergetic, and biochemical processes and appears to offer a significant modulating effect on chronic illnesses encountered daily in their clinical practices.

Earthing (also known as grounding) refers to contact with the Earth's surface electrons by walking barefoot outside or sitting, working, or sleeping indoors connected to conductive systems, some of them patented, that transfer the energy from the ground into the body. Emerging scientific research supports the concept that the Earth's electrons induce multiple physiological changes of clinical significance, including reduced pain, better sleep, a shift from sympathetic to parasympathetic tone in the autonomic nervous system (ANS), and a blood-thinning effect. The research, along with many anecdotal reports, is presented in a new book entitled Earthing [12].
And, just for fun:
2. Review of Earthing Papers

The studies summarized below involve indoor-testing methods under controlled conditions that simulate being barefoot outdoors.

Has anyone ever heard of this? It seems like an unlikely candidate for the NIH to promote.
Science / humanzees
It is a bit of a stretch, but by no means impossible or even unlikely that a hybrid or a chimera combining a human being and a chimpanzee could be produced in a laboratory. After all, human and chimp (or bonobo) share, by most estimates, roughly 99 percent of their nuclear DNA. Granted this 1 percent difference presumably involves some key alleles, the new gene-editing tool CRISPR offers the prospect (for some, the nightmare) of adding and deleting targeted genes as desired. As a result, it is not unreasonable to foresee the possibility--eventually, perhaps, the likelihood--of producing "humanzees" or "chimphumans." Such an individual would not be an exact equal-parts-of-each combination, but would be neither human nor chimp: rather, something in between.

If that prospect isn't shocking enough, here is an even more controversial suggestion: Doing so would be a terrific idea.

The year 2018 is the bicentennial of Mary Shelley's Frankenstein, subtitled the modern Prometheus. Haven't we learned that Promethean hubris leads only to disaster, as did the efforts of the fictional Dr. Frankenstein? But there are also other disasters, currently ongoing, such as the grotesque abuse of nonhuman animals, facilitated by what might well be the most hurtful theologically-driven myth of all times: that human beings are discontinuous from the rest of the natural world, since we were specially created and endowed with souls, whereas "they"--all other creatures--were not.
[Image: book cover of Frankenstein. Illustration by Bernie Wrightson.]

Of course, all that we know of evolution (and by now, it's a lot) demands otherwise, since evolution's most fundamental take-home message is continuity. And it is in fact because of continuity--especially those shared genes--that humanzees or chimphumans could likely be produced. Moreover, I propose that the fundamental take-home message of such creation would be to drive a stake into the heart of that destructive disinformation campaign of discontinuity, of human hegemony over all other living things. There is an immense pile of evidence already demonstrating continuity, including but not limited to physiology, genetics, anatomy, embryology, and paleontology, but it is almost impossible to imagine how the most die-hard advocate of humans having a discontinuously unique biological status could continue to maintain this position if confronted with a real, functioning, human-chimp combination.

It is also possible, however, that my suggestion is doubly fanciful, not only with respect to its biological feasibility, but also whether such a "creation" would have the impact that I propose--and hope. Thus, chimpanzees are widely known to be very similar to human beings: They make and use tools, engage in complex social behavior (including elaborate communication and long-lasting mother-offspring bonds), they laugh, grieve, and affirmatively reconcile after conflicts. They even look like us. Although such recognition has contributed to outrage about abusing chimps--as well as other primates--in circus acts, laboratory experiments, and so forth, it has not generated notable resistance to hunting, imprisoning, and eating other animal species, which, along with chimps themselves, are still considered by most people to be "other" and not aspects of "ourselves." (Chimps, moreover, are enthusiastically consumed in parts of equatorial Africa, where they are a prized component of "bush meat.")

   It is at least arguable that the ultimate benefit of teaching human beings their true nature would be worth the sacrifice paid by a few unfortunates.

story continues.

Shades of Omelas
General Discussion / Love your washing machine?

kind of a crazy article. Stuff we don't think about.
An elite Russian hacking team, a historic ransomware attack, an espionage group in the Middle East, and countless small-time cryptojackers all have one thing in common. Though their methods and objectives vary, they all lean on leaked NSA hacking tool EternalBlue to infiltrate target computers and spread malware across networks.

Leaked to the public not quite a year ago, EternalBlue has joined a long line of reliable hacker favorites. The Conficker Windows worm infected millions of computers in 2008, and the Welchia remote code execution worm wreaked havoc in 2003. EternalBlue is certainly continuing that tradition--and by all indications it's not going anywhere. If anything, security analysts only see use of the exploit diversifying as attackers develop new, clever applications, or simply discover how easy it is to deploy.

"When you take something that's weaponized and a fully developed concept and make it publicly available you're going to have that level of uptake," says Adam Meyers, vice president of intelligence at the security firm CrowdStrike. "A year later there are still organizations that are getting hit by EternalBlue--still organizations that haven't patched it."
The One That Got Away

EternalBlue is the name of both a software vulnerability in Microsoft's Windows operating system and an exploit the National Security Agency developed to weaponize the bug. In April 2017, the exploit leaked to the public, part of the fifth release of alleged NSA tools by the still mysterious group known as the Shadow Brokers. Unsurprisingly, the agency has never confirmed that it created EternalBlue, or anything else in the Shadow Brokers releases, but numerous reports corroborate its origin--and even Microsoft has publicly attributed its existence to the NSA.

The tool exploits a vulnerability in the Windows Server Message Block, a transport protocol that allows Windows machines to communicate with each other and other devices for things like remote services and file and printer sharing. Attackers manipulate flaws in how SMB handles certain packets to remotely execute any code they want. Once they have that foothold into that initial target device, they can then fan out across a network.

    'It's incredible that a tool which was used by intelligence services is now publicly available and so widely used amongst malicious actors.'

Vikram Thakur, Symantec

Microsoft released its EternalBlue patches on March 14 of last year. But security update adoption is spotty, especially on corporate and institutional networks. Within two months, EternalBlue was the centerpiece of the worldwide WannaCry ransomware attacks that were ultimately traced to North Korean government hackers. As WannaCry hit, Microsoft even took the "highly unusual step" of issuing patches for the still popular, but long-unsupported Windows XP and Windows Server 2003 operating systems.

In the aftermath of WannaCry, Microsoft and others criticized the NSA for keeping the EternalBlue vulnerability a secret for years instead of proactively disclosing it for patching. Some reports estimate that the NSA used and continued to refine the EternalBlue exploit for at least five years, and only warned Microsoft when the agency discovered that the exploit had been stolen. EternalBlue can also be used in concert with other NSA exploits released by the Shadow Brokers, like the kernel backdoor known as DoublePulsar, which burrows deep into the trusted core of a computer where it can often lurk undetected.
Eternal Blues

The versatility of the tool has made it an appealing workhorse for hackers. And though WannaCry raised EternalBlue's profile, many attackers had already realized the exploit's potential by then.

Within days of the Shadow Brokers release, security analysts say that they began to see bad actors using EternalBlue to extract passwords from browsers, and to install malicious cryptocurrency miners on target devices.

more at link
The humanities are not just dying -- they are almost dead. In Scotland, the ancient Chairs in Humanity (which is to say, Latin) have almost disappeared in the past few decades: abolished, left vacant, or merged into chairs of classics. The University of Oxford has revised its famed Literae Humaniores course, "Greats," into something resembling a technical classics degree. Both of those were throwbacks to an era in which Latin played the central, organizing role in the humanities. The loss of these vestigial elements reveals a long and slow realignment, in which the humanities have become a loosely defined collection of technical disciplines.

The result of this is deep conceptual confusion about what the humanities are and the reason for studying them in the first place. I do not intend to address the former question here -- most of us know the humanities when we see them.

Instead I wish to address the other question: the reason for studying them in the first place. This is of paramount importance. After all, university officials, deans, provosts, and presidents all are far more likely to know how to construct a Harvard Business School case study than to parse a Greek verb, more familiar with flowcharts than syllogisms, more conversant in management-speak than the riches of the English language. Hence the oft-repeated call to "make the case for the humanities."

Such an endeavor is fraught with ambiguities. Vulgar conservative critiques of the humanities are usually given the greatest exposure, and yet it is often political (and religious) conservatives who have labored the most mightily to foster traditional humanistic disciplines. Left defenders of the humanities have defended their value in the face of an increasingly corporate and crudely economic world, and yet they have also worked to gut some of the core areas of humanistic inquiry -- "Western civ and all that" -- as indelibly tainted by patriarchy, racism, and colonialism.

Academic overproduction has always been a feature of the university and always will be.
The humanities have both left and right defenders and left and right critics. The left defenders of the humanities are notoriously bad at coming up with a coherent, effective defense, but they have been far more consistent in defending the "useless" disciplines against politically and economically charged attacks. The right defenders of the humanities have sometimes put forward a strong and cogent defense of their value, but they have had little sway when it comes to confronting actual attacks on the humanities by conservative politicians. The sad truth is that instead of forging a transideological apology for humanistic pursuits, this ambiguity has led to the disciplines' being squeezed on both sides.

Indeed, both sides enable the humanities' adversaries. Conservatives who seek to use the coercive and financial power of the state to correct what they see as ideological abuses within the professoriate are complicit in the destruction of the old-fashioned and timeless scholarship they supposedly are defending. It is self-defeating to make common cause with corporate interests just to punish the political sins of liberal professors. Progressives who want to turn the humanities into a laboratory for social change, a catalyst for cultural revolution, a training camp for activists, are guilty of the same instrumentalization. When they impose de facto ideological litmus tests for scholars working in every field, they betray their conviction that the humanities exist only to serve contemporary political and social ends.

Caught in the middle are the humanities scholars who simply want to do good work in their fields; to read things and think about what they mean; to tease out conclusions about the past and present through a careful analysis of evidence; to delve deeply into language, art, artifact, culture, and nature. This is what the university was established to do.

To see this, one must first understand that the popular critiques of the humanities -- overspecialization, overproduction, too little teaching -- are fundamentally misguided. Often well-meaning critics think they are attacking the decadence and excess of contemporary humanities scholarship. In fact, they are striking at the very heart of the humanities as they have existed for centuries.
read the cases at the link.
Politics and Current Events / Pensions
Spoiler (click to show/hide)
The sudden arrival of a new class of tech skeptic, the industry apostate, has only complicated the discussion. Late last year, the co-inventor of the Facebook "like," Justin Rosenstein, called it a "bright ding of pseudopleasure"; in January, the investment firm Jana Partners, a shareholder in Apple, wrote a letter to the company warning that its products "may be having unintentional negative consequences."

All but conjuring Oppenheimer at White Sands, these critics offer broadsides, warning about addictive design tricks and profit-driven systems eroding our humanity. But it's hard to discern a collective message in their garment-rending: Is it design that needs fixing? Tech? Capitalism? This lack of clarity may stem from the fact that these people are not ideologues but reformists. They tend to believe that companies should be more responsible -- and users must be, too. But with rare exceptions, the reformists stop short of asking the uncomfortable questions: Is it possible to reform profit-driven systems that turn attention into money? In such a business, can you even separate addiction from success?

Perhaps this is unfair -- the reformists are trying. But while we wait for a consensus, or a plan, allow me to suggest a starting point: the dots. Little. Often red. Sometimes with numbers. Commonly seen at the corners of app icons, where they are known in the trade as badges, they are now proliferating across once-peaceful interfaces on a steep epidemic curve. They alert us to things that need to be checked: unread messages; new activities; pending software updates; announcements; unresolved problems. As they've spread, they've become a rare commonality in the products that we -- and the remorseful technologists -- are so worried about. If not the culprits, the dots are at least accessories to most of the supposed crimes of addictive app design.

When platforms or services sense their users are disengaged, whether from social activities, work or merely a continued contribution to corporate profitability, dots are deployed: outside, inside, wherever they might be seen. I've met dots that existed only to inform me of the existence of other dots, new dots, dots with almost no meaning at all; a dot on my Instagram app led me to another dot within it, which informed me that something had happened on Facebook: Someone I barely know had posted for the first time in a while. These dots are omnipresent, leading everywhere and ending nowhere. So maybe there's something to be gained by connecting them.

The prototypical modern dot -- stop-sign red, with numbers, round, maddening -- was popularized with Mac OS X, the first version of which was released nearly 20 years ago. It was used most visibly as part of Apple's Mail app, perched atop an icon of a blue postage stamp, in a new and now ever-present dock full of apps. It contained a number representing your unread messages. But it wasn't until the launch of the iPhone, in 2007, that dots transformed from a simple utility into a way of life -- from a solution into a cause unto themselves.

That year, we got the very first glimpse of the iPhone's home screen, in Steve Jobs's hand, onstage at MacWorld. It showed three dots, ringed in white: 1 unread text; 5 calls or voice mail messages; 1 email. Jobs set about showing off the apps, opening them, eliminating the dots. Eventually, when the iPhone was opened to outside developers, badge use accelerated. As touch-screen phones careered toward ubiquity, and as desktop interfaces and website design and mobile operating systems huddled together around a crude and adapting set of visual metaphors, the badge was ascendant.

On Windows desktop computers, they tended to be blue and lived in the lower right corner. On BlackBerrys, red, with a white asterisk in the middle. On social media, in apps and on websites, badge design was more creative, appearing as little speech bubbles or as rectangles. They make appearances on Facebook and across Google products, perhaps most notoriously on the ill-fated Google Plus social network, where blocky badges were filled with inexplicably, desperately high numbers. (This tactic has since spread, obnoxiously, to news sites and, inexplicably, to comment sections.) Android itself has remained officially unbadged, but the next version of the operating system, called Oreo, will include them by default, completing their invasion.

What's so powerful about the dots is that until we investigate them, they could signify anything: a career-altering email; a reminder that Winter Sales End Soon; a match, a date, a "we need to talk." The same badge might lead to word that Grandma's in the hospital or that, according to a prerecorded voice, the home-security system you don't own is in urgent need of attention or that, for the 51st time today, someone has posted in the group chat.

New and flourishing modes of socialization amount, in the most abstract terms, to the creation and reduction of dots, and the experience of their attendant joys and anxieties. Dots are deceptively, insidiously simple: They are either there or they're not; they contain a number, and that number has a value. But they imbue whatever they touch with a spirit of urgency, reminding us that behind each otherwise static icon is unfinished business. They don't so much inform us or guide us as correct us: You're looking there, but you should be looking here. They're a lawn that must be mowed. Boils that must be lanced, or at least scabs that itch to be picked. They're Bubble Wrap laid over your entire digital existence.

To their credit, the big tech companies seem to be aware of the problem, at least in the narrow terms of user experience. In Google's guide for application developers, the company makes a gentle attempt to pre-empt future senseless dot deployment. "Don't badge every notification, as there are cases where badges don't make sense," the company suggests. Apple, in its guidelines, seems a bit more fed up. "Minimize badging," it says. "Don't overwhelm users by connecting badging with a huge amount of information that changes frequently. Use it to present brief, essential information and atypical content changes that are highly likely to be of interest."

These companies know better than anyone that dots are a problem, but they also know that dots work. Late last year, a red badge burbled to the surface next to millions of iPhone users' Settings apps. It looked as though it might be an update, but it turned out to be a demand: Finish adding your credit card to Apple Pay, or the dot stays put. Apple might as well have said: Give us your credit card number, or we will annoy you until you do.

The lack of consensus within the mounting resistance to Big Tech can also be found within the perimeter of the dot. After all, it's where the most dangerous conflations take place: of what we need, and what we're told we need; of what purpose our software serves to us, and us to it; of dismissal with fulfillment. The dot is where ill-gotten attention is laundered into legitimate-seeming engagement. On this, our most influential tech companies seem to agree. Maybe our self-appointed saviors can, too.
The 100-page report, titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," boasts 26 experts from 14 different institutions and organizations, including Oxford University's Future of Humanity Institute, Cambridge University's Centre for the Study of Existential Risk, Elon Musk's OpenAI, and the Electronic Frontier Foundation. The report builds upon a two-day workshop held at Oxford University back in February of last year. In the report, the authors detail some of the ways AI could make things generally unpleasant in the next few years, focusing on three security domains of note--the digital, physical, and political arenas--and how the malicious use of AI could upset each of these.

"It is often the case that AI systems don't merely reach human levels of performance but significantly surpass it," said Miles Brundage, a Research Fellow at Oxford University's Future of Humanity Institute and a co-author of the report, in a statement. "It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour."

Indeed, the big takeaway of the report is that AI is now on the cusp of being a tremendously negative disruptive force as rival states, criminals, and terrorists use the scale and efficiency of AI to launch finely-targeted and highly efficient attacks.

"As AI capabilities become more powerful and widespread, we expect the growing use of AI systems to lead to the expansion of existing threats, the introduction of new threats, and a change to the typical character of threats," write the authors in the new report.

They warn that the cost of attacks will be lowered owing to the scalable use of AI and the offloading of tasks typically performed by humans. Similarly, new threats may emerge through the use of systems that will complete tasks normally too impractical or onerous for humans.

"We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems," they write.

In terms of specifics, the authors warn of cyber attacks involving automated hacking, spear phishing, speech synthesis to impersonate targets, and "data poisoning." The advent of drones and semi- and fully-autonomous systems introduces an entirely new class of risks; the nightmarish scenarios include the deliberate crashing of multiple self-driving vehicles, coordinated attacks using thousands of micro-drones, converting commercial drones into face-recognizing assassins, and holding critical infrastructures for ransom. Politically, AI could be used to sway popular opinion, create highly targeted propaganda, and spread fake--but perhaps highly believable--posts and videos. AI will enable better surveillance technologies, both in public and private spaces.

"We also expect novel attacks that take advantage of an improved capacity to analyze human behaviors, moods, and beliefs on the basis of available data," add the authors. "These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates."

Sadly, the era of "fake news" is already upon us. It's becoming increasingly difficult to tell fact from fiction. Russia's apparent misuse of social media during the last US presidential election showed the potential for state actors to use social networks in nefarious ways. In some respects, the new report has a "tell us something we didn't already know" aspect to it.

Seán Ó hÉigeartaigh, Executive Director of Cambridge University's Centre for the Study of Existential Risk and a co-author of the new study, says hype used to outstrip fact when it came to our appreciation of AI and machine learning--but those days are now long gone. "This report looks at the practices that just don't work anymore--and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable--and what type of laws and international regulations might work in tandem with this," he explained.

To mitigate many of these emerging threats, Ó hÉigeartaigh and his colleagues presented five high-level recommendations:

        AI and ML researchers and engineers should acknowledge the dual use nature of their research
        Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI
        Best practices should be identified from other high-stakes technical domains, including computer security and other dual use technologies, and imported where applicable to the case of AI
        The development of normative and ethical frameworks should be prioritized in each of these domains
        The range of stakeholders and experts involved in discussions of these challenges should be expanded

In addition to these strategies, the authors say a "rethinking" of cyber security is needed, along with investments in institutional and technological solutions. Less plausibly, they say developers should adopt a "culture of responsibility" and consider the powers of data sharing and openness (good luck with that).

Ilia Kolochenko, CEO of web security company High-Tech Bridge, believes the authors of the new report are overstating the risks, and that it'll be business as usual over the next decade. "First of all, we should clearly distinguish between Strong AI--artificial intelligence, which is capable of replacing the human brain--and the generally misused 'AI' term that has become amorphous and ambiguous," explained Kolochenko in a statement emailed to Gizmodo.

He says criminals have already been using simple machine-learning algorithms to increase the efficiency of their attacks, but these efforts have been successful because of basic cyber security deficiencies and omissions in organizations. To Kolochenko, machine learning is merely an "auxiliary accelerator."

"One should also bear in mind that [artificial intelligence and machine learning is] being used by the good guys to fight cybercrime more efficiently too," he added. "Moreover, development of AI technologies usually requires expensive long term investments that Black Hats [malicious hackers] typically cannot afford. Therefore, I don't see substantial risks or revolutions that may happen in the digital space because of AI in the next five years at least."

Kolochenko is not wrong when he says that AI will be used to mitigate many of the threats made possible by AI, but to say that no "substantial" risks will emerge in the coming years seems a bit pie-in-the-sky. Sadly, the warnings presented in the new report will likely fall on deaf ears until people start to feel the ill effects of AI at a personal level. For now, it's all a bit too abstract for citizens to care, and politicians aren't yet prepared to deal with something so intangible and seemingly futuristic. In the meantime, we should remain wary of the risks, work to apply the recommendations proposed by these authors, and pound the message over and over to the best of our abilities.

link to the actual report
Statistics came well before computers. It would be very different if it were the other way around.

The stats most people learn in high school or college come from the time when computations were done with pen and paper. "Statistics were constrained by the computational technology available at the time," says Stanford statistics professor Robert Tibshirani. "People use certain methods because that is how it all started and that's what they are used to. It's hard to change it."

People who have taken intro statistics courses might recognize terms like "normal distribution," "t-distribution," and "least squares regression." We learn about them, in large part, because these were convenient things to calculate with the tools available in the early 20th century. We shouldn't be learning this stuff anymore--or, at least, it shouldn't be the first thing we learn. There are better options.

As a former data scientist, there is no question I get asked more than, "What is the best way to learn statistics?" I always give the same answer: Read An Introduction to Statistical Learning. Then, if you finish that and want more, read The Elements of Statistical Learning. These two books, written by statistics professors at Stanford University, the University of Washington, and the University of Southern California, are the most intuitive and relevant books I've found on how to do statistics with modern technology. Tibshirani is a coauthor of both. You can download them for free.
Number crunchers

The books are based on the concept of "statistical learning," a mashup of stats and machine learning. The field of machine learning is all about feeding huge amounts of data into algorithms to make accurate predictions. Statistics is concerned with predictions as well, says Tibshirani, but also with determining how confident we can be about the importance of certain inputs.

This is important in areas like medicine, where a researcher doesn't just want to know whether a medicine worked, but also why it worked. Statistical learning is meant to take the best ideas from machine learning and computer science, and explain how they can be used and interpreted through a statistician's lens.

The beauty of these books is that they make seemingly impenetrable concepts--"cross-validation," "logistic regression," "support vector machines"--easily understandable. This is because the authors focus on intuition rather than mathematics. Unlike many statisticians, Tibshirani and his coauthors don't come from a math background. He believes this helps them think conceptually. "We try to explain [concepts] intuitively by explaining the underlying idea first," he says. "Then we give examples of a situation where you would expect it to work. And also, a situation where it might not work. I think people really appreciate that." I certainly did.

For example, a section of An Introduction to Statistical Learning is dedicated to explaining the use of "bootstrapping"--a statistical technique only available in the age of computers. Bootstrapping is a way to assess the accuracy of an estimate by generating multiple datasets from the same data.

For example, let's say you collected the weights of 1,000 randomly selected adult women in the US, and found that the average was 130 pounds. How confident can you be in this number? In conventional statistics, to answer this question you would use a formula developed more than a century ago, which relies on many assumptions. Today, rather than make those assumptions, you can use a computer to take thousands of new samples, each drawn with replacement from your original 1,000 (this is the bootstrapping), and see how tightly the recomputed averages cluster around 130. The tighter the cluster, the more confident you can be in the estimate.
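The books demonstrate this in R, but the idea is easy to sketch in a few lines of Python (simulated data standing in for the hypothetical weight survey):

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for the survey: weights (pounds) of 1,000 randomly selected adult women.
    weights = rng.normal(loc=130, scale=25, size=1000)

    # The bootstrap: resample the data with replacement many times and
    # recompute the mean each time to see how much the estimate varies.
    boot_means = np.array([
        rng.choice(weights, size=weights.size, replace=True).mean()
        for _ in range(10_000)
    ])

    # A 95% confidence interval for the mean, read straight off the percentiles.
    low, high = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean = {weights.mean():.1f}, 95% bootstrap CI = ({low:.1f}, {high:.1f})")

No century-old formula required: the spread of the resampled means is itself the measure of confidence.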
Theory and application

These books, mercifully, don't require high-level math, like multivariate calculus or linear algebra. (If you're into that sort of thing, there is a wealth of worthy but dry academic literature out there for you.) "While knowledge of those topics is very valuable, we believe that they are not required in order to develop a solid conceptual understanding of how statistical learning methods work, and how they should be applied," says Daniela Witten, a coauthor of An Introduction to Statistical Learning.

Helpfully, the books also provide code you can use to apply the tools with the statistical programming language R. I recommend putting their examples to work on a dataset you are excited about. If you are into novels, use it to analyze Goodreads ratings. If you like basketball, apply their examples to numbers at Basketball Reference. The statistical learning tools are wonderful in themselves, but I've found they work best for people who are motivated by a personal or professional project.

Data and statistics are an increasingly important part of modern life, and nearly everyone would be better off with a deeper understanding of the tools that help explain our world. Even if you don't want to become a data analyst--which happens to be one of the fastest-growing jobs out there, just so you know--these books are invaluable guides to help explain what's going on.
lots of the text is links at the article.
natesilver: Again, a lot of this is just that David Brooks had a party (the GWB-era GOP) that he once mostly agreed with and now he doesn't have one. Which is annoying for David Brooks but doesn't really provide much evidence either way in terms of broader public sentiment.
Report: The Failure of Policy Planning in California's Charter School Facility Funding

By In the Public Interest

California has more charter schools than any other state in the nation, in large part because of generous public funding and subsidies to lease, build, or buy school buildings. But much of this public investment, hundreds of millions of dollars, has been misspent on schools that do not fulfill the intent of state charter school policy and undermine the financial viability of California's public school districts.

In the report, Spending Blind: The Failure of Policy Planning in California's Charter School Facility Funding, In the Public Interest reveals that a substantial portion of the more than $2.5 billion in tax dollars or taxpayer subsidized financing spent on California charter school facilities in the past 15 years has been misspent on: schools that underperformed nearby traditional public schools; schools built in districts that already had enough classroom space; schools that were found to have discriminatory enrollment policies; and in the worst cases, schools that engaged in unethical or corrupt practices.


The report's key findings include:

    Over the past 15 years, California charter schools have received more than $2.5 billion in tax dollars or taxpayer subsidized funds to lease, build, or buy school buildings.
    Nearly 450 charter schools have opened in places that already had enough classroom space for all students--and this overproduction of schools was made possible by generous public support, including $111 million in rent, lease, or mortgage payments picked up by taxpayers, $135 million in general obligation bonds, and $425 million in private investments subsidized with tax credits or tax exemptions.
    For three-quarters of California charter schools, the quality of education on offer--based on state and charter industry standards--is worse than that of a nearby traditional public school that serves a demographically similar population. Taxpayers have provided these schools with an estimated three-quarters of a billion dollars in direct funding and an additional $1.1 billion in taxpayer-subsidized financing.
    Even by the charter industry's standards, the worst charter schools receive generous facility funding. The California Charter Schools Association identified 161 charter schools that ranked in the bottom 10% of schools serving comparable populations last year, but even these schools received more than $200 million in tax dollars and tax-subsidized funding.
    At least 30% of charter schools were both opened in places that had no need for additional seats and also failed to provide an education superior to that available in nearby public schools. This number is almost certainly an underestimate, but even at this rate, Californians provided these schools combined facilities funding of more than $750 million, at a net cost to taxpayers of nearly $400 million.
    Public facilities funding has been disproportionately concentrated among the less than one-third of schools that are owned by Charter Management Organizations (CMOs) that operate chains of between three and 30 schools. An even more disproportionate share of funding has been taken by just four large CMO chains--Aspire, KIPP, Alliance, and Animo/Green Dot.
    Since 2009, the 253 schools found by the American Civil Liberties Union of Southern California to maintain discriminatory enrollment policies have been awarded a collective $75 million under the SB740 program, $120 million in general obligation bonds, and $150 million in conduit bond financing.
    CMOs have used public tax dollars to buy private property. The Alliance College-Ready Public Schools network of charter schools, for instance, has benefited from more than $110 million in federal and state taxpayer support for its facilities, which are not owned by the public, but are part of a growing empire of privately owned Los Angeles-area real estate now worth in excess of $200 million.
you can download the whole report at the link.

Objective To assess the prospective associations between consumption of ultra-processed food and risk of cancer.

Design Population based cohort study.

Setting and participants 104 980 participants aged at least 18 years (median age 42.8 years) from the French NutriNet-Santé cohort (2009-17). Dietary intakes were collected using repeated 24 hour dietary records, designed to register participants' usual consumption for 3300 different food items. These were categorised according to their degree of processing by the NOVA classification.

Main outcome measures Associations between ultra-processed food intake and risk of overall, breast, prostate, and colorectal cancer assessed by multivariable Cox proportional hazard models adjusted for known risk factors.

Results Ultra-processed food intake was associated with higher overall cancer risk (n=2228 cases; hazard ratio for a 10% increment in the proportion of ultra-processed food in the diet 1.12 (95% confidence interval 1.06 to 1.18); P for trend<0.001) and breast cancer risk (n=739 cases; hazard ratio 1.11 (1.02 to 1.22); P for trend=0.02). These results remained statistically significant after adjustment for several markers of the nutritional quality of the diet (lipid, sodium, and carbohydrate intakes and/or a Western pattern derived by principal component analysis).

Conclusions In this large prospective study, a 10% increase in the proportion of ultra-processed foods in the diet was associated with a significant increase of greater than 10% in risks of overall and breast cancer. Further studies are needed to better understand the relative effect of the various dimensions of processing (nutritional composition, food additives, contact materials, and neoformed contaminants) in these associations.
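For the curious, the kind of model the abstract describes can be sketched in a few lines of Python with the lifelines package. This is a toy reconstruction on simulated data, not the NutriNet-Santé analysis (which adjusts for many more covariates); all column names are made up:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 5000

    # Hypothetical cohort: ultra-processed food share of diet in 10-point
    # increments (0 = 0%, 6 = 60%), plus two stand-in confounders.
    df = pd.DataFrame({
        "upf_per_10pct": rng.uniform(0, 6, n),
        "age": rng.uniform(18, 73, n),
        "smoker": rng.integers(0, 2, n),
    })

    # Simulate follow-up so that the hazard rises ~12% per 10-point increment.
    rate = 0.01 * np.exp(0.11 * df["upf_per_10pct"] + 0.02 * (df["age"] - 45))
    df["time"] = rng.exponential(1 / rate)
    df["event"] = (df["time"] < 8).astype(int)  # cancers observed within 8 years
    df["time"] = df["time"].clip(upper=8)       # administrative censoring at 8 years

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    print(cph.hazard_ratios_)  # exp(coef); ~1.12 for upf_per_10pct

The fitted hazard ratio for upf_per_10pct lands near 1.12, which is how to read the abstract's headline number: each additional 10 percentage points of ultra-processed food in the diet multiplies the modeled cancer hazard by about 1.12.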
Computers and Technology / dynamic maps online
Has anyone ever used the ArcGIS web server? If so, have you had any problems or issues with limitations? (MartinM I'm looking at you here)
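For context, the sort of request I'd be making is a plain REST query against a published layer, something like this Python sketch (service URL and layer number are hypothetical):

    import requests

    # Hypothetical endpoint; any published layer exposes a /query operation.
    url = "https://example.com/arcgis/rest/services/MyMap/MapServer/0/query"
    params = {
        "where": "1=1",            # no filter: return all features
        "outFields": "*",          # all attribute fields
        "returnGeometry": "false",
        "f": "json",               # response format
    }
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    features = resp.json().get("features", [])
    print(f"{len(features)} features returned")

One limitation I already know about: query results are capped per request by the service's maxRecordCount setting (commonly 1,000), so big layers mean paging. Curious what else people have run into.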
Politics and Current Events / dystopian realities
Imagine that this is your daily life: While on your way to work or on an errand, every 100 meters you pass a police blockhouse. Video cameras on street corners and lamp posts recognize your face and track your movements. At multiple checkpoints, police officers scan your ID card, your irises and the contents of your phone. At the supermarket or the bank, you are scanned again, your bags are X-rayed and an officer runs a wand over your body -- at least if you are from the wrong ethnic group. Members of the main group are usually waved through.

You have had to complete a survey about your ethnicity, your religious practices and your "cultural level"; about whether you have a passport, relatives or acquaintances abroad, and whether you know anyone who has ever been arrested or is a member of what the state calls a "special population."

This personal information, along with your biometric data, resides in a database tied to your ID number. The system crunches all of this into a composite score that ranks you as "safe," "normal" or "unsafe." Based on those categories, you may or may not be allowed to visit a museum, pass through certain neighborhoods, go to the mall, check into a hotel, rent an apartment, apply for a job or buy a train ticket. Or you may be detained to undergo re-education, like many thousands of other people.

A science-fiction dystopia? No.
The new therapies are based on research begun in the 1980s showing that people in the throes of a migraine attack have high levels of a protein called calcitonin gene-related peptide (CGRP) in their blood.

Step by step, researchers tracked and studied this neurochemical's effects. They found that injecting the peptide into the blood of people prone to migraines triggers migraine-like headaches, whereas people not prone to migraines experienced, at most, mild pain. Blocking transmission of CGRP in mice appeared to prevent migraine-like symptoms. And so a few companies started developing a pill that might do the same in humans.

Clinical trials of the first pills showed they were effective against migraine, but the trials were halted in 2011 over concerns about potential liver damage. So, four pharmaceutical companies rejiggered their approach. To bypass the liver, all four instead looked to an injectable therapy called monoclonal antibodies -- tiny immune molecules most commonly used to treat cancer. Not only do these bypass the liver to block CGRP, but one injection appears to be effective for up to three months with almost no noticeable side effects.

aside from the  :staregonk:  bolded part, wow. There is pretty much no pain like a migraine. I used to get them a lot but the condition seems to have passed for me. I know several chronic sufferers though.

ETA: I don't like this forum for this topic. We could use a health forum.
Science / Things to worry about
The Earth's magnetic field protects our planet from dangerous solar and cosmic rays, like a giant shield. As the poles switch places (or try to), that shield is weakened; scientists estimate that it could waste away to as little as a tenth of its usual force. The shield could be compromised for centuries while the poles move, allowing malevolent radiation closer to the surface of the planet for that whole time. Already, changes within the Earth have weakened the field over the South Atlantic so much that satellites exposed to the resulting radiation have experienced memory failure.

That radiation isn't hitting the surface yet. But at some point, when the magnetic field has dwindled enough, it could be a different story. Daniel Baker, director of the Laboratory for Atmospheric and Space Physics at the University of Colorado, Boulder, one of the world's experts on how cosmic radiation affects the Earth, fears that parts of the planet will become uninhabitable during a reversal. The dangers: devastating streams of particles from the sun, galactic cosmic rays, and enhanced ultraviolet B rays from a radiation-damaged ozone layer, to name just a few of the invisible forces that could harm or kill living creatures.

How bad could it be? Scientists have never established a link between previous pole reversals and catastrophes like mass extinctions. But the world of today is not the world of 780,000 years ago, when the poles last reversed, or even 40,000 years ago, when they tried to. Today, there are nearly 7.6 billion people on Earth, twice as many as in 1970. We have drastically changed the chemistry of the atmosphere and the ocean with our activities, impairing the life support system of the planet. Humans have built huge cities, industries and networks of roads, slicing up access to safer living spaces for many other creatures. We have pushed perhaps a third of all known species toward extinction and have imperiled the habitats of many more. Add cosmic and ultraviolet radiation to this mix, and the consequences for life on Earth could be ruinous.

And the perils are not just biological. The vast cyber-electric cocoon that has become the central processing system of modern civilization is in grave danger. Solar energetic particles can rip through the sensitive miniature electronics of the growing number of satellites circling the Earth, badly damaging them. The satellite timing systems that govern electric grids would be likely to fail. The grid's transformers could be torched en masse. Because grids are so tightly coupled with each other, failure would race across the globe, causing a domino run of blackouts that could last for decades.


But these dangers are rarely considered by those whose job it is to protect the electronic pulse of civilization. More satellites are being put into orbit with more highly miniaturized (and therefore more vulnerable) electronics. The electrical grid becomes more interconnected every day, despite the greater risks from solar storms.

No lights. No computers. No cellphones. Even flushing a toilet or filling a car's gas tank would be impossible. And that's just for starters.

One of the best ways of protecting satellites and grids from space weather is to predict precisely where the most damaging force will hit. Operators could temporarily shut down a satellite or disconnect part of the grid. But progress on learning how to track damaging space weather has not kept pace with the exponential increase in technologies that could be damaged by it. And private satellite operators aren't collating and sharing information about how their electronics are withstanding space radiation, a practice that could help everyone protect their gear.

We have blithely built our civilization's critical infrastructure during a time when the planet's magnetic field was relatively strong, not accounting for the field's bent for anarchy. Not only is the field turbulent and ungovernable, but, at this point, it is unpredictable. It will have its way with us, no matter what we do. Our task is to figure out how to make it hurt as little as possible.
The tricks propagandists use to beat science

A model of the way opinions spread reveals how propagandists use the scientific process against itself to secretly influence policy makers.

Together, tobacco companies and their PR firm created and funded an organization called the Tobacco Industry Research Committee to produce results and opinions that contradicted the view that smoking kills. This led to a false sense of uncertainty and delayed policy changes that would otherwise have restricted sales.

The approach was hugely successful for the tobacco industry at the time. In the same book, Oreskes and Conway show how a similar approach has influenced the climate change debate. Again, the scientific consensus is clear and unambiguous but the public debate has been deliberately muddied to create a sense of uncertainty. Indeed, Oreskes and Conway say that some of the same people who dreamt up the tobacco strategy also worked on undermining the climate change debate.

That raises an important question: How easy is it for malicious actors to distort the public perception of science?

Today we get an answer thanks to the work of James Owen Weatherall, Cailin O'Connor at the University of California, Irvine, and Justin Bruner at the Australian National University in Canberra, who have created a computer model of the way scientific consensus forms and how this influences the opinion of policy makers. The team studied how easily these views can be distorted and determined that today it is straightforward to distort the perception of science with techniques that are even more subtle than those used by the tobacco industry.

The original tobacco strategy involved several lines of attack. One of these was to fund research that supported the industry and then publish only the results that fit the required narrative. "For instance, in 1954 the TIRC distributed a pamphlet entitled 'A Scientific Perspective on the Cigarette Controversy' to nearly 200,000 doctors, journalists, and policy-makers, in which they emphasized favorable research and questioned results supporting the contrary view," say Weatherall and co, who call this approach biased production.

A second approach promoted independent research that happened to support the tobacco industry's narrative. For example, it supported research into the link between asbestos and lung cancer because it muddied the waters by showing that other factors can cause cancer. Weatherall and his team call this approach selective sharing.

Weatherall and co investigated how these techniques influence public opinion. To do this they used a computer model of the way the scientific process influences the opinion of policy makers.

This model contains three types of actors. The first is scientists who come to a consensus by carrying out experiments and allowing the results, and those from their peers, to influence their view.

Each scientist begins with the goal of deciding which of two theories is better. One of these theories is based on "action A," which is well understood and known to work 50 percent of the time. This corresponds to theory A.

By contrast, theory B is based on an action that is poorly understood. Scientists are uncertain about whether or not it is better than A. Nevertheless, the model is set up so that theory B is actually better.

Scientists can make observations using their theory, and crucially, these have probabilistic outcomes. So even if theory B is the better of the two, some results will back theory A.

At the beginning of the simulation, the scientists are given a random belief in theory A or B. For example, a scientist with a 0.7 credence believes there is a 70 percent chance that theory B is correct and therefore applies theory B in the next round of experiments.

After each round of experiments, the scientists update their views based on the results of their experiment and the results of scientists they are linked to in the network. In the next round, they repeat this process and update their beliefs again, and so on.
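
Continuing the sketch, the update step can be modeled as a Bayesian comparison of two point hypotheses, "B is better" (success rate 0.5 + ε) versus "B is worse" (0.5 − ε). This mirrors the standard network-epistemology setup this kind of model builds on, though the authors' exact formulation may differ:

```python
def update_credence(credence, successes, trials, epsilon=EPSILON):
    """Bayes-update the credence that theory B is the better one, given
    `successes` out of `trials` uses of action B. Hypotheses compared:
    p_B = 0.5 + epsilon (B better) versus p_B = 0.5 - epsilon (B worse).
    The binomial coefficients cancel, so plain likelihoods suffice."""
    p_good, p_bad = 0.5 + epsilon, 0.5 - epsilon
    like_good = p_good ** successes * (1 - p_good) ** (trials - successes)
    like_bad = p_bad ** successes * (1 - p_bad) ** (trials - successes)
    return like_good * credence / (like_good * credence + like_bad * (1 - credence))

# Each scientist applies this once for their own batch of trials and once
# for each batch reported by the scientists they are linked to.
```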

The simulation stops when all scientists believe one theory or the other or when the belief in one theory reaches some threshold level. In this way, Weatherall and co simulate the way that scientists come to a consensus view.
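
Putting the pieces together, one round plus the stopping rule might look like the following. Again a sketch: the network structure, trial counts, and thresholds are stand-ins, not the paper's values:

```python
def simulate(credences, neighbors, n_trials=10, threshold=0.99, max_rounds=100_000):
    """credences[i]: scientist i's credence that B is better.
    neighbors[i]: the scientists whose results i gets to see."""
    for _ in range(max_rounds):
        # Only scientists who currently favor B test it; A's payoff is
        # already known, so only trials of B yield informative results.
        results = {i: run_trials(P_B, n_trials)
                   for i, c in enumerate(credences) if c > 0.5}
        if not results:          # nobody tests B any more: B has been abandoned
            return "consensus on A", credences
        updated = []
        for i, c in enumerate(credences):
            for j in [i] + neighbors[i]:          # own result plus peers'
                if j in results:
                    c = update_credence(c, results[j], n_trials)
            updated.append(c)
        credences = updated
        if all(c > threshold for c in credences):
            return "consensus on B", credences
    return "no consensus", credences

# Example: six scientists who all listen to one another, random priors.
n = 6
creds = [random.random() for _ in range(n)]
peers = [[j for j in range(n) if j != i] for i in range(n)]
print(simulate(creds, peers))
```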

But how does this process influence policy makers? To find out, Weatherall and his team introduced a second group of people into the model--the policy makers--who are influenced by the scientists (but do not influence the scientists themselves). Crucially, policy makers do not listen to all the scientists, only a subset of them.

The policy makers start off with a view and update it after each round, using the opinions of the scientists they listen to.
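
In the sketch's terms, a policy maker is an agent who runs the identical update rule but only ever sees some of the results; which subset, and how it is chosen, is where the model's interest lies:

```python
def policy_maker_round(credence, listens_to, results, n_trials=10):
    """One round of updating for a policy maker who hears only the
    scientists in `listens_to`. Unlike scientists, policy makers run
    no experiments of their own."""
    for j in listens_to:
        if j in results:
            credence = update_credence(credence, results[j], n_trials)
    return credence
```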

But the key focus of the team's work is how a propagandist can influence the policy makers' views. So Weatherall and co introduce a third actor into this model. This propagandist observes all the scientists and communicates with all the policy makers with the goal of persuading them that the worse theory is correct (in this case, theory A). They do this by searching only for views that suggest theory A is correct and sharing these with the policy makers.

The propagandist can work in two ways, corresponding to biased production and selective sharing. In the first, the propagandist uses an in-house team of scientists to produce results that favor theory A. In the second, the propagandist simply cherry-picks the results from independent scientists that happen to favor theory A.
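
Selective sharing is almost trivial to express in the sketch: forward only the genuine results that happen to look bad for B. The cutoff used here, at or below chance, is an assumption:

```python
def selective_share(results, n_trials=10):
    """Cherry-pick real results: keep only the batches whose success rate
    for B is at or below 50 percent, i.e. those that read as evidence
    for theory A."""
    return {j: k for j, k in results.items() if k / n_trials <= 0.5}

# Each round the propagandist feeds policy makers only the filtered set:
#   filtered = selective_share(results)
#   credence = policy_maker_round(credence, list(filtered), filtered)
# Biased production would instead generate extra in-house batches and
# discard any that come out favoring B before sharing.
```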

Both types of influence can make a big impact, say Weatherall and co--selective sharing turns out to be just as good as biased production. "We find that the presence of a single propagandist who communicates only actual findings of scientists can have a startling influence on the beliefs of policy makers," they explain. "Under many scenarios we find that while the community of scientists converges on true beliefs about the world, the policy makers reach near certainty in the false belief."

And that's without any fraudulent or bad science, just cherry-picking the results. Indeed, the propagandists don't even need to use their own in-house scientists to back specific ideas. When there is natural variation in the results of unbiased scientific experiments, the propagandists can have a significant influence by cherry-picking the ones that back their own agenda. And it can be done at very low risk because all the results they choose are "real" science.

That finding has important implications. It means that anybody who wants to manipulate public opinion and influence policy makers can achieve extraordinary success with relatively subtle tricks.

Indeed, it's not just nefarious actors who can end up influencing policy makers in ways that do not match the scientific consensus. Weatherall and co point out that science journalists also cherry-pick results. Reporters are generally under pressure to find the most interesting or sexy or amusing stories, and this biases what policy makers see. Just how significant this effect is in the real world isn't clear, however.

The team's key finding will have profound consequences. "One might have expected that actually producing biased science would have a stronger influence on public opinion than merely sharing others' results," say Weatherall and co. "But there are strong senses in which the less invasive, more subtle strategy of selective sharing is more effective than biased production."

The work has implications for the nature of science too. This kind of selective sharing is effective only because of the broad variation in the results that emerge from certain kinds of experiments, particularly small, low-powered studies.

This is a well-known problem, and the solution is clear: bigger, more highly powered studies. "Given some fixed financial resources, funding bodies should allocate those resources to a few very high-powered studies," argue Weatherall and co, who go on to suggest that scientists should be given incentives for producing that kind of work. "For instance, scientists should be granted more credit for statistically stronger results--even in cases where they happen to be null."

That would make it more difficult for propagandists to find spurious results they can use to distort views.
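
The power argument is easy to quantify in the sketch's terms: the probability that an honest study of B nevertheless comes out at or below chance, handing the propagandist ammunition, falls off sharply with sample size. Again assuming the illustrative edge of 0.05:

```python
from math import comb

def prob_study_favors_A(n_trials, epsilon=0.05):
    """Probability that a study of action B (true success rate 0.5 + epsilon)
    yields at most 50 percent successes and so reads as evidence for A."""
    p = 0.5 + epsilon
    return sum(comb(n_trials, k) * p ** k * (1 - p) ** (n_trials - k)
               for k in range(n_trials // 2 + 1))

# prob_study_favors_A(10)   -> about 0.50
# prob_study_favors_A(100)  -> about 0.18
# prob_study_favors_A(1000) -> under 0.001
```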

But given how powerful selective sharing seems to be, the question now is, who is likely to make effective use of Weatherall and co's conclusions first: propagandists or scientists/policy makers?
read the rest at the link
Politics and Current Events / r>g Piketty was right
More bracing still are the data's implications for debates on inequality. Karl Marx once reasoned that as capitalists piled up wealth, their investments would suffer diminishing returns and the pay-off from them would drop towards zero, eventually provoking destructive fights between industrial countries. That seems not to be true; returns on housing and equities remain high even though the stock of assets as a share of GDP has doubled since 1970. Gravity-defying returns might reflect new and productive uses for capital: firms deploying machines instead of people, for instance, or well-capitalised companies with relatively small numbers of employees taking over growing swathes of the economy. High returns on equity capital may therefore be linked to a more tenuous status for workers and to a drop in the share of GDP which is paid out as labour income.

Similarly, long-run returns provide support for the grand theory of inequality set out in 2013 by Thomas Piketty, a French economist, who suggested (based in part on his own data-gathering) that the rate of return on capital was typically higher than the growth rate of the economy. As a consequence, the stock of wealth should grow over time relative to GDP. And because wealth is less evenly distributed than income, this growth should push the economy towards ever higher levels of inequality. Mr Piketty summed up this contention in the pithy expression "r > g".

In fact, that may understate the case, according to the newly gathered figures. In most times and places, "r", which the authors calculate as the average return across all assets, both safe and risky, is well above "g" or GDP growth. Since 1870, they reckon, the average real return on wealth has been about 6% a year whereas real GDP growth has been roughly 3% a year on average. Only during the first and second world wars did rates of return drop much below growth rates. And in recent decades, the "great compression" in incomes and wealth that followed the world wars has come undone, as asset returns persistently outstrip the growth of the economy.
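
Taking the article's round numbers at face value, the compounding is easy to check. A minimal sketch, assuming returns are fully reinvested (consumption and taxes out of capital income would slow the accumulation):

```python
import math

r, g = 0.06, 0.03                # the article's long-run averages since 1870
gap = (1 + r) / (1 + g) - 1      # wealth/GDP grows about 2.9% a year
doubling = math.log(2) / math.log((1 + r) / (1 + g))
print(f"gap {gap:.3%} per year; wealth/GDP doubles in {doubling:.0f} years")
# -> gap 2.913% per year; wealth/GDP doubles in 24 years
```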

In such ways does the painstaking collection of data fundamentally reshape understanding of the way economies work. It is a shame that data-gathering does not carry higher status within the profession. It would raise the status of economics itself.
Attorney General Jeff Sessions has rescinded several Obama-era directives that discouraged enforcement of federal marijuana laws in states that had legalized the substance.

In a memo sent to U.S. attorneys Thursday, Sessions noted that federal law prohibits the possession and sale of marijuana, and he undid four previous Obama administration memos that advised against bringing weed prosecutions in states where it was legal to use for recreational or medical purposes. Sessions said prosecutors should use their own discretion -- taking into consideration the department's limited resources, the seriousness of the crime, and the deterrent effect that they could impose -- in weighing whether charges were appropriate.

"It is the mission of the Department of Justice to enforce the laws of the United States, and the previous issuance of guidance undermines the rule of law and the ability of our local, state, tribal, and federal law enforcement partners to carry out this mission," Sessions said in a statement. "Therefore, today's memo on federal marijuana enforcement simply directs all U.S. Attorneys to use previously established prosecutorial principles that provide them all the necessary tools to disrupt criminal organizations, tackle the growing drug crisis, and thwart violent crime across our country."

The move, first reported by the Associated Press, potentially paves the way for the federal government to crack down on the burgeoning pot industry -- though the precise impact remains to be seen. It also might spark something of a federalist crisis, and it drew some resistance even from members of Sessions's own party.

Computers and Technology / Brave browser
My son told me to use it and I just downloaded it and am posting from it. It loads noticeably faster than Chrome. So far, that's the only thing I can say about it. It has a built-in ad blocker, which gets around the Android restriction.