
TR Memescape

  • Talk Rational: All are welcome, but not all will be welcomed!

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - Testy Calibrate

Report: The Failure of Policy Planning in California's Charter School Facility Funding

By In the Public Interest

California has more charter schools than any other state in the nation, in large part because of generous public funding and subsidies to lease, build, or buy school buildings. But much of this public investment, hundreds of millions of dollars, has been misspent on schools that do not fulfill the intent of state charter school policy and undermine the financial viability of California's public school districts.

In the report, Spending Blind: The Failure of Policy Planning in California's Charter School Facility Funding, In the Public Interest reveals that a substantial portion of the more than $2.5 billion in tax dollars or taxpayer subsidized financing spent on California charter school facilities in the past 15 years has been misspent on: schools that underperformed nearby traditional public schools; schools built in districts that already had enough classroom space; schools that were found to have discriminatory enrollment policies; and in the worst cases, schools that engaged in unethical or corrupt practices.


The report's key findings include:

  • Over the past 15 years, California charter schools have received more than $2.5 billion in tax dollars or taxpayer subsidized funds to lease, build, or buy school buildings.
  • Nearly 450 charter schools have opened in places that already had enough classroom space for all students--and this overproduction of schools was made possible by generous public support, including $111 million in rent, lease, or mortgage payments picked up by taxpayers, $135 million in general obligation bonds, and $425 million in private investments subsidized with tax credits or tax exemptions.
  • For three-quarters of California charter schools, the quality of education on offer--based on state and charter industry standards--is worse than that of a nearby traditional public school that serves a demographically similar population. Taxpayers have provided these schools with an estimated three-quarters of a billion dollars in direct funding and an additional $1.1 billion in taxpayer-subsidized financing.
  • Even by the charter industry's standards, the worst charter schools receive generous facility funding. The California Charter Schools Association identified 161 charter schools that ranked in the bottom 10% of schools serving comparable populations last year, but even these schools received more than $200 million in tax dollars and tax-subsidized funding.
  • At least 30% of charter schools were both opened in places that had no need for additional seats and also failed to provide an education superior to that available in nearby public schools. This number is almost certainly underestimated, but even at this rate, Californians provided these schools combined facilities funding of more than $750 million, at a net cost to taxpayers of nearly $400 million.
  • Public facilities funding has been disproportionately concentrated among the less than one-third of schools that are owned by Charter Management Organizations (CMOs) that operate chains of between three and 30 schools. An even more disproportionate share of funding has been taken by just four large CMO chains--Aspire, KIPP, Alliance, and Animo/Green Dot.
  • Since 2009, the 253 schools found by the American Civil Liberties Union of Southern California to maintain discriminatory enrollment policies have been awarded a collective $75 million under the SB740 program, $120 million in general obligation bonds, and $150 million in conduit bond financing.
  • CMOs have used public tax dollars to buy private property. The Alliance College-Ready Public Schools network of charter schools, for instance, has benefited from more than $110 million in federal and state taxpayer support for its facilities, which are not owned by the public, but are part of a growing empire of privately owned Los Angeles-area real estate now worth in excess of $200 million.
You can download the whole report at the link.
Politics and Current Events / The Peter Thiel thread
Peter Thiel, the Silicon Valley billionaire known for his libertarian politics, is leaving the Bay Area after four decades and stepping back from tech due in part to his dissatisfaction with the industry's liberal politics.

Citing what they called Silicon Valley's "sclerotic" culture and "conformity of thought," a source close to Thiel told CNNMoney that the investor would soon move to Los Angeles, where he will "focus on a number of new projects including creating a new media endeavor."

"L.A. is a better place to do that," the source said. "L.A. is also less out of touch and it's a better place to connect with the rest of the country."

Thiel will also move his investment firm Thiel Capital and his Thiel Foundation to Los Angeles, the source said. Founders Fund, where he is a general partner, will remain in San Francisco. Thiel is also considering resigning from the board of Facebook, according to the source.

Thiel, a co-founder of PayPal, has long been outspoken about his politics. But he became a lightning rod in the community and nationally after backing Donald Trump's presidential bid and then joining Trump's transition team.

The Wall Street Journal, which broke the news of Thiel's move, reported Thursday that Facebook CEO Mark Zuckerberg even asked Thiel to consider resigning from his company's board last summer. This came after Netflix CEO Reed Hastings, another Facebook board member, reportedly criticized what he called Thiel's "catastrophically bad judgment" in supporting Trump.

Since then, the source close to him said, Thiel has grown more aware of what he views as liberal intolerance.

"Peter thinks the same network effects that concentrate talent in the Valley (adding value) are leading to conformity of thought (limiting the generation of new ideas)," the source said in an email. "SF is still important but the most exciting future tech developments may come outside of it. Peter also thinks the Valley has become too mono-cultural and the cost of living is making the whole area more sclerotic, less vital."

The source would not provide any further detail on the new media endeavor Thiel plans to work on in Los Angeles, nor any of the other business opportunities he plans to pursue.

Objective To assess the prospective associations between consumption of ultra-processed food and risk of cancer.

Design Population based cohort study.

Setting and participants 104 980 participants aged at least 18 years (median age 42.8 years) from the French NutriNet-Santé cohort (2009-17). Dietary intakes were collected using repeated 24 hour dietary records, designed to register participants' usual consumption for 3300 different food items. These were categorised according to their degree of processing by the NOVA classification.

Main outcome measures Associations between ultra-processed food intake and risk of overall, breast, prostate, and colorectal cancer assessed by multivariable Cox proportional hazard models adjusted for known risk factors.

Results Ultra-processed food intake was associated with higher overall cancer risk (n=2228 cases; hazard ratio for a 10% increment in the proportion of ultra-processed food in the diet 1.12 (95% confidence interval 1.06 to 1.18); P for trend<0.001) and breast cancer risk (n=739 cases; hazard ratio 1.11 (1.02 to 1.22); P for trend=0.02). These results remained statistically significant after adjustment for several markers of the nutritional quality of the diet (lipid, sodium, and carbohydrate intakes and/or a Western pattern derived by principal component analysis).

Conclusions In this large prospective study, a 10% increase in the proportion of ultra-processed foods in the diet was associated with a significant increase of greater than 10% in risks of overall and breast cancer. Further studies are needed to better understand the relative effect of the various dimensions of processing (nutritional composition, food additives, contact materials, and neoformed contaminants) in these associations.
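Because the Cox model is log-linear in its covariates, a hazard ratio reported per 10% increment composes multiplicatively for larger increments. A minimal sketch of that arithmetic (the 1.12 figure is the study's reported estimate; the scaling rule is the standard Cox log-linearity assumption, not something computed in the paper):

```python
def implied_hr(delta_pct, hr_per_10=1.12):
    """Scale a hazard ratio reported per 10% increment to another
    increment, using the Cox model's log-linearity assumption."""
    return hr_per_10 ** (delta_pct / 10)

# A 30% higher ultra-processed share would imply 1.12 ** 3,
# i.e. roughly a 40% higher hazard, under that assumption.
```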
Computers and Technology / dynamic maps online
Has anyone ever used the ArcGIS web server? If so, have you had any problems or issues with limitations? (MartinM I'm looking at you here)
Politics and Current Events / dystopian realities
Imagine that this is your daily life: While on your way to work or on an errand, every 100 meters you pass a police blockhouse. Video cameras on street corners and lamp posts recognize your face and track your movements. At multiple checkpoints, police officers scan your ID card, your irises and the contents of your phone. At the supermarket or the bank, you are scanned again, your bags are X-rayed and an officer runs a wand over your body -- at least if you are from the wrong ethnic group. Members of the main group are usually waved through.

You have had to complete a survey about your ethnicity, your religious practices and your "cultural level"; about whether you have a passport, relatives or acquaintances abroad, and whether you know anyone who has ever been arrested or is a member of what the state calls a "special population."

This personal information, along with your biometric data, resides in a database tied to your ID number. The system crunches all of this into a composite score that ranks you as "safe," "normal" or "unsafe." Based on those categories, you may or may not be allowed to visit a museum, pass through certain neighborhoods, go to the mall, check into a hotel, rent an apartment, apply for a job or buy a train ticket. Or you may be detained to undergo re-education, like many thousands of other people.

A science-fiction dystopia? No.
The new therapies are based on research begun in the 1980s showing that people in the throes of a migraine attack have high levels of a protein called calcitonin gene-related peptide (CGRP) in their blood.

Step by step, researchers tracked and studied this neurochemical's effects. They found that injecting the peptide into the blood of people prone to migraines triggers migraine-like headaches, whereas people not prone to migraines experienced, at most, mild pain. Blocking transmission of CGRP in mice appeared to prevent migraine-like symptoms. And so a few companies started developing a pill that might do the same in humans.

The first pills proved effective against migraine in clinical trials, but development was halted in 2011 over concerns about potential liver damage. So, four pharmaceutical companies rejiggered their approach. To bypass the liver, all four instead looked to an injectable therapy called monoclonal antibodies -- tiny immune molecules most commonly used to treat cancer. Not only do these bypass the liver to block CGRP, but one injection appears to be effective for up to three months with almost no noticeable side effects.

aside from the  :staregonk:  bolded part, wow. There is pretty much no pain like a migraine. I used to get them a lot but the condition seems to have passed for me. I know several chronic sufferers though.

ETA: I don't like this forum for this topic. We could use a health forum.
Science / Things to worry about
The Earth's magnetic field protects our planet from dangerous solar and cosmic rays, like a giant shield. As the poles switch places (or try to), that shield is weakened; scientists estimate that it could waste away to as little as a tenth of its usual force. The shield could be compromised for centuries while the poles move, allowing malevolent radiation closer to the surface of the planet for that whole time. Already, changes within the Earth have weakened the field over the South Atlantic so much that satellites exposed to the resulting radiation have experienced memory failure.

That radiation isn't hitting the surface yet. But at some point, when the magnetic field has dwindled enough, it could be a different story. Daniel Baker, director of the Laboratory for Atmospheric and Space Physics at the University of Colorado, Boulder, one of the world's experts on how cosmic radiation affects the Earth, fears that parts of the planet will become uninhabitable during a reversal. The dangers: devastating streams of particles from the sun, galactic cosmic rays, and enhanced ultraviolet B rays from a radiation-damaged ozone layer, to name just a few of the invisible forces that could harm or kill living creatures.

How bad could it be? Scientists have never established a link between previous pole reversals and catastrophes like mass extinctions. But the world of today is not the world of 780,000 years ago, when the poles last reversed, or even 40,000 years ago, when they tried to. Today, there are nearly 7.6 billion people on Earth, twice as many as in 1970. We have drastically changed the chemistry of the atmosphere and the ocean with our activities, impairing the life support system of the planet. Humans have built huge cities, industries and networks of roads, slicing up access to safer living spaces for many other creatures. We have pushed perhaps a third of all known species toward extinction and have imperiled the habitats of many more. Add cosmic and ultraviolet radiation to this mix, and the consequences for life on Earth could be ruinous.

And the perils are not just biological. The vast cyber-electric cocoon that has become the central processing system of modern civilization is in grave danger. Solar energetic particles can rip through the sensitive miniature electronics of the growing number of satellites circling the Earth, badly damaging them. The satellite timing systems that govern electric grids would be likely to fail. The grid's transformers could be torched en masse. Because grids are so tightly coupled with each other, failure would race across the globe, causing a domino run of blackouts that could last for decades.

 No lights. No computers. No cellphones. Even flushing a toilet or filling a car's gas tank would be impossible. And that's just for starters.

But these dangers are rarely considered by those whose job it is to protect the electronic pulse of civilization. More satellites are being put into orbit with more highly miniaturized (and therefore more vulnerable) electronics. The electrical grid becomes more interconnected every day, despite the greater risks from solar storms.


One of the best ways of protecting satellites and grids from space weather is to predict precisely where the most damaging force will hit. Operators could temporarily shut down a satellite or disconnect part of the grid. But progress on learning how to track damaging space weather has not kept pace with the exponential increase in technologies that could be damaged by it. And private satellite operators aren't collating and sharing information about how their electronics are withstanding space radiation, a practice that could help everyone protect their gear.

We have blithely built our civilization's critical infrastructure during a time when the planet's magnetic field was relatively strong, not accounting for the field's bent for anarchy. Not only is the field turbulent and ungovernable, but, at this point, it is unpredictable. It will have its way with us, no matter what we do. Our task is to figure out how to make it hurt as little as possible.
The tricks propagandists use to beat science

A model of the way opinions spread reveals how propagandists use the scientific process against itself to secretly influence policy makers.

Together, tobacco companies and a public relations firm created and funded an organization called the Tobacco Industry Research Committee to produce results and opinions that contradicted the view that smoking kills. This led to a false sense of uncertainty and delayed policy changes that would otherwise have restricted sales.

The approach was hugely successful for the tobacco industry at the time. In their book Merchants of Doubt, Oreskes and Conway show how a similar approach has influenced the climate change debate. Again, the scientific consensus is clear and unambiguous but the public debate has been deliberately muddied to create a sense of uncertainty. Indeed, Oreskes and Conway say that some of the same people who dreamt up the tobacco strategy also worked on undermining the climate change debate.

That raises an important question: How easy is it for malicious actors to distort the public perception of science?

Today we get an answer thanks to the work of James Owen Weatherall, Cailin O'Connor at the University of California, Irvine, and Justin Bruner at the Australian National University in Canberra, who have created a computer model of the way scientific consensus forms and how this influences the opinion of policy makers. The team studied how easily these views can be distorted and determined that today it is straightforward to distort the perception of science with techniques that are even more subtle than those used by the tobacco industry.

The original tobacco strategy involved several lines of attack. One of these was to fund research that supported the industry and then publish only the results that fit the required narrative. "For instance, in 1954 the TIRC distributed a pamphlet entitled 'A Scientific Perspective on the Cigarette Controversy' to nearly 200,000 doctors, journalists, and policy-makers, in which they emphasized favorable research and questioned results supporting the contrary view," say Weatherall and co, who call this approach biased production.

A second approach promoted independent research that happened to support the tobacco industry's narrative. For example, it supported research into the link between asbestos and lung cancer because it muddied the waters by showing that other factors can cause cancer. Weatherall and his team call this approach selective sharing.

Weatherall and co investigated how these techniques influence public opinion. To do this they used a computer model of the way the scientific process influences the opinion of policy makers.

This model contains three types of actors. The first is scientists who come to a consensus by carrying out experiments and allowing the results, and those from their peers, to influence their view.

Each scientist begins with the goal of deciding which of two theories is better. One of these theories, theory A, is based on "action A," which is well understood and known to work 50 percent of the time.

By contrast, theory B is based on an action that is poorly understood. Scientists are uncertain about whether or not it is better than A. Nevertheless, the model is set up so that theory B is actually better.

Scientists can make observations using their theory, and crucially, these have probabilistic outcomes. So even if theory B is the better of the two, some results will back theory A.

At the beginning of the simulation, the scientists are given a random belief in theory A or B. For example, a scientist with a 0.7 credence believes there is a 70 percent chance that theory B is correct and therefore applies theory B in the next round of experiments.

After each round of experiments, the scientists update their views based on the results of their experiment and the results of scientists they are linked to in the network. In the next round, they repeat this process and update their beliefs again, and so on.

The simulation stops when all scientists believe one theory or the other or when the belief in one theory reaches some threshold level. In this way, Weatherall and co simulate the way that scientists come to a consensus view.

But how does this process influence policy makers? To find out, Weatherall and his team introduced a second group of people into the model--the policy makers--who are influenced by the scientists (but do not influence the scientists themselves). Crucially, policy makers do not listen to all the scientists, only a subset of them.

The policy makers start off with a view and update it after each round, using the opinions of the scientists they listen to.

But the key focus of the team's work is how a propagandist can influence the policy makers' views. So Weatherall and co introduce a third actor into this model. This propagandist observes all the scientists and communicates with all the policy makers with the goal of persuading them that the worse theory is correct (in this case, theory A). They do this by searching only for views that suggest theory A is correct and sharing these with the policy makers.

The propagandist can work in two ways, corresponding to biased production and selective sharing. In the first, the propagandist uses an in-house team of scientists to produce results that favor theory A. In the second, the propagandist simply cherry-picks those results from independent scientists that happen to favor theory A.

Both types of influence can make a big impact, say Weatherall and co--selective sharing turns out to be just as good as biased production. "We find that the presence of a single propagandist who communicates only actual findings of scientists can have a startling influence on the beliefs of policy makers," they explain. "Under many scenarios we find that while the community of scientists converges on true beliefs about the world, the policy makers reach near certainty in the false belief."

And that's without any fraudulent or bad science, just cherry-picking the results. Indeed, the propagandists don't even need to use their own in-house scientists to back specific ideas. When there is natural variation in the results of unbiased scientific experiments, the propagandists can have a significant influence by cherry-picking the ones that back their own agenda. And it can be done at very low risk because all the results they choose are "real" science.
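The selective-sharing dynamic described above can be sketched in a few lines of code. This is a toy version, not the actual Weatherall, O'Connor, and Bruner model: the payoff probabilities, group sizes, fully connected network, and the propagandist's cherry-picking rule are all illustrative assumptions.

```python
import random

P_A, P_B = 0.5, 0.6  # assumed success rates; theory B is in fact better

def update(credence, successes, trials):
    """Bayesian update of the credence that theory B is correct,
    after observing `successes` in `trials` tests of action B."""
    lr = (P_B ** successes * (1 - P_B) ** (trials - successes)) / \
         (P_A ** successes * (1 - P_A) ** (trials - successes))
    return credence * lr / (credence * lr + (1 - credence))

def update_all(credence, results):
    for s, t in results:
        credence = update(credence, s, t)
    return credence

def simulate(n_scientists=20, n_policymakers=5, trials=10, rounds=100, seed=1):
    random.seed(seed)
    scientists = [random.random() for _ in range(n_scientists)]
    policymakers = [random.random() for _ in range(n_policymakers)]
    for _ in range(rounds):
        # Scientists who currently favor B test it; all results are
        # shared across the (fully connected) community.
        results = [(sum(random.random() < P_B for _ in range(trials)), trials)
                   for c in scientists if c > 0.5]
        scientists = [update_all(c, results) for c in scientists]
        # The propagandist forwards only the real results that happen
        # to look bad for B -- no fabrication, just cherry-picking.
        cherry = [(s, t) for s, t in results if s < t / 2]
        policymakers = [update_all(c, cherry) for c in policymakers]
    return scientists, policymakers
```

Run with these settings, the scientific community converges on the true theory B while the policy makers, who see only the cherry-picked failures, converge on the worse theory A.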

That finding has important implications. It means that anybody who wants to manipulate public opinion and influence policy makers can achieve extraordinary success with relatively subtle tricks.

Indeed, it's not just nefarious actors who can end up influencing policy makers in ways that do not match the scientific consensus. Weatherall and co point out that science journalists also cherry-pick results. Reporters are generally under pressure to find the most interesting or sexy or amusing stories, and this biases what policy makers see. Just how significant this effect is in the real world isn't clear, however.

The team's key finding will have profound consequences. "One might have expected that actually producing biased science would have a stronger influence on public opinion than merely sharing others' results," say Weatherall and co. "But there are strong senses in which the less invasive, more subtle strategy of selective sharing is more effective than biased production."

The work has implications for the nature of science too. This kind of selective sharing is effective only because of the broad variation in results that emerge from certain kinds of experiment, particularly those that are small, low-powered studies.

This is a well-known problem, and the solution is clear: bigger, more highly powered studies. "Given some fixed financial resources, funding bodies should allocate those resources to a few very high-powered studies," argue Weatherall and co, who go on to suggest that scientists should be given incentives for producing that kind of work. "For instance, scientists should be granted more credit for statistically stronger results--even in cases where they happen to be null."

That would make it more difficult for propagandists to find spurious results they can use to distort views.

But given how powerful selective sharing seems to be, the question now is, who is likely to make effective use of Weatherall and co's conclusions first: propagandists or scientists/policy makers?
read the rest at the link
Politics and Current Events / r>g Piketty was right
More bracing still are the data's implications for debates on inequality. Karl Marx once reasoned that as capitalists piled up wealth, their investments would suffer diminishing returns and the pay-off from them would drop towards zero, eventually provoking destructive fights between industrial countries. That seems not to be true; returns on housing and equities remain high even though the stock of assets as a share of GDP has doubled since 1970. Gravity-defying returns might reflect new and productive uses for capital: firms deploying machines instead of people, for instance, or well-capitalised companies with relatively small numbers of employees taking over growing swathes of the economy. High returns on equity capital may therefore be linked to a more tenuous status for workers and to a drop in the share of GDP which is paid out as labour income.

Similarly, long-run returns provide support for the grand theory of inequality set out in 2013 by Thomas Piketty, a French economist, who suggested (based in part on his own data-gathering) that the rate of return on capital was typically higher than the growth rate of the economy. As a consequence, the stock of wealth should grow over time relative to GDP. And because wealth is less evenly distributed than income, this growth should push the economy towards ever higher levels of inequality. Mr Piketty summed up this contention in the pithy expression "r > g".

In fact, that may understate the case, according to the newly gathered figures. In most times and places, "r", which the authors calculate as the average return across all assets, both safe and risky, is well above "g" or GDP growth. Since 1870, they reckon, the average real return on wealth has been about 6% a year whereas real GDP growth has been roughly 3% a year on average. Only during the first and second world wars did rates of return drop much below growth rates. And in recent decades, the "great compression" in incomes and wealth that followed the world wars has come undone, as asset returns persistently outstrip the growth of the economy.
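The mechanism behind r > g is just compound growth at two different rates. A toy illustration using the article's 6% and 3% figures (the starting wealth-to-GDP ratio of 3 is an arbitrary assumption for the example):

```python
def wealth_to_gdp(years, r=0.06, g=0.03, initial_ratio=3.0):
    """Wealth compounds at r while GDP compounds at g, so the
    wealth-to-GDP ratio grows by a factor (1 + r) / (1 + g) per year."""
    return initial_ratio * ((1 + r) / (1 + g)) ** years
```

At those rates the ratio roughly quadruples over 50 years, which is the sense in which a persistent gap between r and g pushes wealth ever higher relative to income.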

In such ways does the painstaking collection of data fundamentally reshape understanding of the way economies work. It is a shame that data-gathering does not carry higher status within the profession. It would raise the status of economics itself.
Attorney General Jeff Sessions has rescinded several Obama-era directives that discouraged enforcement of federal marijuana laws in states that had legalized the substance.

In a memo sent to U.S. attorneys Thursday, Sessions noted that federal law prohibits the possession and sale of marijuana, and he undid four previous Obama administration memos that advised against bringing weed prosecutions in states where it was legal to use for recreational or medical purposes. Sessions said prosecutors should use their own discretion -- taking into consideration the department's limited resources, the seriousness of the crime, and the deterrent effect that they could impose -- in weighing whether charges were appropriate.

"It is the mission of the Department of Justice to enforce the laws of the United States, and the previous issuance of guidance undermines the rule of law and the ability of our local, state, tribal, and federal law enforcement partners to carry out this mission," Sessions said in a statement. "Therefore, today's memo on federal marijuana enforcement simply directs all U.S. Attorneys to use previously established prosecutorial principles that provide them all the necessary tools to disrupt criminal organizations, tackle the growing drug crisis, and thwart violent crime across our country."

The move, first reported by the Associated Press, potentially paves the way for the federal government to crack down on the burgeoning pot industry -- though the precise impact remains to be seen. It also might spark something of a federalist crisis, and it drew some resistance even from members of Sessions's own party.

Computers and Technology / Brave browser
My son told me to use it and I just downloaded it and am posting from it. It loads noticeably faster than Chrome. So far, that's the only thing I can say about it. It has a built-in ad blocker which gets around the Android restriction.
OKANOGAN COUNTY, Wash. - Officials with the Naval Air Station Whidbey Island said one of their aircraft was involved in the obscene skywritings spotted in Okanogan County.

Photos sent to KREM 2 by multiple sources show skydrawings of what some people are saying is male genitalia. Some sources have even tweeted pictures of what they saw.

A mother who lives in Okanogan who took pictures of the drawings reached out to KREM 2 to complain about the images, saying she was upset she might have to explain to her young children what the drawings were.

In a statement to KREM 2 News navy officials said, "The Navy holds its aircrew to the highest standards and we find this absolutely unacceptable, of zero training value and we are holding the crew accountable."

KREM 2 spoke to the Federal Aviation Administration to get some information about who may have made the drawings. FAA officials said unless the act poses a safety risk, there is nothing they can do about it. The official said they "cannot police morality."
My mom is pretty old and has a lot of trouble now with computers. Thing is, she used them intensively throughout her professional career, beginning, admittedly, with IBM punch cards to run her programs, but she was still using computers intensively after y2k so it's not like she didn't encounter that sort of complexity.

I wonder how it will be different
Science / Flowers for Algernon
Human Mini-Brains Growing Inside Rat Bodies Are Starting to Integrate
"We are entering totally new ground here."

Stem cell technology has advanced so much that scientists can grow miniature versions of human brains -- called organoids, or mini-brains if you want to be cute about it -- in the lab, but medical ethicists are concerned about recent developments in this field involving the growth of these tiny brains in other animals. Those concerns are bound to become more serious after the annual meeting of the Society for Neuroscience starting November 11 in Washington, D.C., where two teams of scientists plan to present previously unpublished research on the unexpected interaction between human mini-brains and their rat and mouse hosts.

In the new papers, according to STAT, scientists will report that the organoids survived for extended periods of time -- two months in one case -- and even connected to lab animals' circulatory and nervous systems, transferring blood and nerve signals between the host animal and the implanted human cells. This is an unprecedented advancement for mini-brain research.

"We are entering totally new ground here," Christof Koch, president of the Allen Institute for Brain Science in Seattle, told STAT. "The science is advancing so rapidly, the ethics can't keep up."
DMT receptors in mini-brain organoids.

That mini-brains can even be grown in the lab is a huge advancement in the first place, as they have many of the same characteristics as living human brains that are in the early stages of development. Though they're not "alive" in the same sense that you and I are, they grow and are organized into different layers like our brains are. They even react in similar ways to stimuli like psychedelic drugs. Organoids are poised to revolutionize research on the human brain since scientists can perform tests on them that would be unethical to attempt on living humans.

Scientists have debated whether these brains are "conscious," but the fact that they could be successfully implanted in lab animals raises a whole new set of ethical concerns for the researchers who work with them. One of the major concerns in the mini-brain scenario is that these organoids could grow to more advanced levels within lab animals, making the debate about mini-brain consciousness much more urgent.

Putting human brain structures into non-human animals creates a thorny ethical area that raises people's fears about medical research going too far into unfamiliar territory -- and too quickly. It's likely to be a recurring theme in this field, too. In January, Salk Institute researchers developed human-pig chimeras, creating the possibility that pigs with human brain cells might also develop human consciousness.

STAT also reports that a third lab, in addition to the two presenting at the Society for Neuroscience meeting, has successfully connected human brain organoids to blood vessels. This attempt veered into such challenging ethical territory, though, that the lab reportedly paused its efforts.
Politics and Current Events / bus - sub
Starting today, you can now take a driverless bus down the Las Vegas strip
    Navya's ARMA driverless bus will begin shuttling passengers along a half-mile route in downtown Las Vegas.
    The bus has driven Las Vegas's streets before -- but this time, it will navigate along a route filled with other traffic.

The future is headed to Las Vegas in the form of a driverless, electric bus that will begin shuttling passengers along a half-mile downtown route at 10 a.m. Wednesday. The autonomous shuttle is the brainchild of French autonomous vehicle company Navya, which is teaming up with AAA and Keolis, the company that owns and operates the shuttle, to gauge passenger enthusiasm for the product.

It isn't the first time the driverless shuttle has hit Las Vegas's streets. Back in January, the bus took a ten-day test drive along an empty route that had been cordoned off from the rest of traffic. Now, the bus will intermix with regular traffic, communicating with a series of wireless sensors installed into traffic signals along its journey.

Self-driving bus crashes two hours after launch in Las Vegas

The bus was touted as the United States' first self-driving shuttle project for the public before it hit a semi-truck.

 A driverless shuttle bus crashed less than two hours after it was launched in Las Vegas on Wednesday.

The city's officials had been hosting an unveiling ceremony for the bus, described as the US' first self-driving shuttle pilot project geared towards the public, before it collided with a semi-truck.

According to the Las Vegas Review-Journal, the human driver of the other vehicle was at fault, there were no injuries, and the incident caused minor damage.

The oval-shaped shuttle -- sponsored by AAA, the Review-Journal added -- can transport up to 12 passengers at a time. It has an attendant and a computer monitor but no steering wheel or brake pedals, navigating instead with GPS and electric curb sensors.

The crash follows the US House's passage of the SELF DRIVE Act in September, which, if passed by the Senate, would exempt car manufacturers from various federal and state regulations and allow the eventual deployment of up to 100,000 test vehicles a year.

Under the Act, states would still decide whether or not to permit self-driving cars on their roads. However, the federal government could permit a car manufacturer to bypass certain federal safety rules, as well as some state regulations.
General Discussion / lost at sea for 5 months

Ms. Appel, who said she had lived in Hawaii for 10 years, began planning the trip two and a half years ago out of a desire to further explore the South Pacific. But after their engine flooded, the plan went awry. Ms. Appel and Ms. Fuiaba at first believed they could get to their destination using only the boat's sails. But two months into a journey that ordinarily takes half that long, they began to issue daily distress calls using a high-frequency radio.

For 98 days, no one answered.

"It was very depressing and very hopeless," Ms. Appel said. Their boat had been too far out of range to communicate with anyone either on land or at sea. "There is true humility to wondering if today is your last day, if tonight is your last night."
[Photo: Zeus, one of the women's pet dogs, being helped aboard the Ashland on Wednesday. Credit: MCS3 Jonathan Clay/United States Navy]

In an interview with The Associated Press, Ms. Appel's mother, Joyce Appel, said she had called the Coast Guard after being unable to reach her daughter a week and a half into the trip. By that point, Ms. Appel's mobile phone had fallen overboard.

A Coast Guard search-and-rescue mission turned up empty, but Ms. Appel's mother still believed her daughter would return. "I had hope all along, she is very resourceful and she's curious and as things break, she tries to repair them," she told The Associated Press. "She doesn't sit and wait for the repairman to get there, so I knew the same thing would be true of the boat."

The Ashland, a ready-response vessel that operates out of Sasebo, Japan, was alerted to the location of Ms. Appel and Ms. Fuiaba's boat by a Taiwanese fishing vessel on Oct. 24, according to the Navy. The next morning, nearly half a year of anguished uncertainty came to an end.

"They saved our lives," Ms. Appel said in the Navy statement. She described the feeling of seeing a ship on the horizon as "pure relief."

Ms. Appel and Ms. Fuiaba are currently aboard the Ashland, where they will remain until its next port of call. Their boat, deemed unseaworthy, was left behind.
you'd think it would be considered seaworthy after all that
Politics and Current Events / A centrist future
Swiped from ratskep

It's weird how sometimes what used to appear corny suddenly appears disturbing. It's well done though imo.

But it might not matter:
Arts and Entertainment / Hey nineteen
Walter Becker, guitarist, bassist and co-founder of the Rock and Roll Hall of Fame-inducted band Steely Dan, died Sunday at the age of 67.

"I intend to keep the music we created together alive as long as I can with the Steely Dan band," Fagen promises of Becker's legacy

Becker's official site announced the death; no cause of death or other details were provided.

"Walter Becker was my friend, my writing partner and my bandmate since we met as students at Bard College in 1967," Donald Fagen wrote in a tribute to Becker. "He was smart as a whip, an excellent guitarist and a great songwriter. He was cynical about human nature, including his own, and hysterically funny."

Becker missed Steely Dan's Classic East and West concerts in July as he recovered from an unspecified ailment. "Walter's recovering from a procedure and hopefully he'll be fine very soon," Fagen told Billboard at the time. Becker's doctor advised the guitarist not to leave his Maui home for the performances.
Bob: "I can can I I everything else."

Alice: "Balls have zero to me to me to me to me to me to me to me to me to."

To you and me, that passage looks like nonsense. But what if I told you this nonsense was the discussion of what might be the most sophisticated negotiation software on the planet? Negotiation software that had learned, and evolved, to get the best deal possible with more speed and efficiency -- and perhaps hidden nuance -- than you or I ever could? Because it is.

This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they'd made a mistake in programming.

"There was no reward to sticking to English language," says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal-a very effective bit of AI vs. AI dogfighting researchers have dubbed a "generative adversarial network"-neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

"Agents will drift off understandable language and invent codewords for themselves," says Batra, speaking to a now-predictable phenomenon that's been observed again, and again, and again. "Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isn't so different from the way communities of humans create shorthands."

Indeed. Humans have developed unique dialects for everything from trading pork bellies on the floor of the Mercantile Exchange to hunting down terrorists as SEAL Team Six -- simply because humans sometimes perform better by not abiding by normal language conventions.

So should we let our software do the same thing? Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.

The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.
We Teach Bots To Talk, But We'll Never Learn Their Language

Facebook ultimately opted to require its negotiation bots to speak in plain old English. "Our interest was having bots who could talk to people," says Mike Lewis, research scientist at FAIR. Facebook isn't alone in that perspective. When I asked Microsoft about computer-to-computer languages, a spokesperson clarified that Microsoft was more interested in human-to-computer speech. Meanwhile, Google, Amazon, and Apple are all also focusing incredible energies on developing conversational personalities for human consumption. They're the next wave of user interface, like the mouse and keyboard for the AI era.

The other issue, as Facebook admits, is that it has no way of truly understanding any divergent computer language. "It's important to remember, there aren't bilingual speakers of AI and human languages," says Batra. We already don't generally understand how complex AIs think because we can't really see inside their thought process. Adding AI-to-AI conversations to this scenario would only make that problem worse.

But at the same time, it feels shortsighted, doesn't it? If we can build software that can speak to other software more efficiently, shouldn't we use that? Couldn't there be some benefit?

Because, again, we absolutely can lead machines to develop their own languages. Facebook has three published papers proving it. "It's definitely possible, it's possible that [language] can be compressed, not just to save characters, but compressed to a form that it could express a sophisticated thought," says Batra. Machines can converse with any baseline building blocks they're offered. That might start with human vocabulary, as with Facebook's negotiation bots. Or it could start with numbers, or binary codes. But as machines develop meanings, these symbols become "tokens" -- they're imbued with rich meanings. As Dauphin points out, machines might not think as you or I do, but tokens allow them to exchange incredibly complex thoughts through the simplest of symbols. The way I think about it is with algebra: If A + B = C, the "A" could encapsulate almost anything. But to a computer, what "A" can mean is so much bigger than what that "A" can mean to a person, because computers have no outright limit on processing power.

"It's perfectly possible for a special token to mean a very complicated thought," says Batra. "The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it's because we have a limit to cognition." Computers don't need to simplify concepts. They have the raw horsepower to process them.
Why We Should Let Bots Gossip

But how could any of this technology actually benefit the world, beyond these theoretical discussions? Would our servers be able to operate more efficiently with bots speaking to one another in shorthand? Could microsecond processes, like algorithmic trading, see some reasonable increase? Chatting with Facebook, and various experts, I couldn't get a firm answer.

However, as paradoxical as this might sound, we might see big gains in such software better understanding our intent. While two computers speaking their own language might be more opaque, an algorithm predisposed to learn new languages might chew through strange new data we feed it more effectively. For example, one researcher recently tried to teach a neural net to create new colors and name them. It was terrible at it, generating names like Sudden Pine and Clear Paste (that Clear Paste, by the way, was the label it gave a light green). But then they made a simple change to the data they were feeding the machine to train it. They made everything lowercase -- because lowercase and uppercase letters were confusing it. Suddenly, the color-creating AI was working, well, pretty well! And for whatever reason, it preferred, and performed better, with RGB values as opposed to other numerical color codes.

Why did these simple data changes matter? Basically, the researcher did a better job at speaking the computer's language. As one coder put it to me, "Getting the data into a format that makes sense for machine learning is a huge undertaking right now and is more art than science. English is a very convoluted and complicated language and not at all amicable for machine learning."
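The fix described above is ordinary text normalization. A minimal sketch of that kind of preprocessing (hypothetical helper names, assuming a color-name/RGB dataset like the researcher's) shows how little code the difference can come down to:

```python
# Minimal sketch of the preprocessing change described above: normalizing
# color names so the model sees a single token form for "Teal", "teal ",
# and "TEAL" instead of three distinct strings.

def normalize_name(name):
    """Lowercase and collapse whitespace so case variants map to one form."""
    return " ".join(name.lower().split())

def normalize_dataset(rows):
    """rows: list of (color_name, (r, g, b)) training pairs."""
    return [(normalize_name(name), rgb) for name, rgb in rows]

raw = [("Teal ", (0, 128, 128)), ("TEAL", (0, 128, 128)), ("Hot Pink", (255, 105, 180))]
print(normalize_dataset(raw))
# [('teal', (0, 128, 128)), ('teal', (0, 128, 128)), ('hot pink', (255, 105, 180))]
```

After normalization, the first two rows collapse to the same name, so the model no longer has to learn that "Teal" and "TEAL" mean the same color.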

In other words, machines allowed to speak and generate machine languages could somewhat ironically allow us to communicate with (and even control) machines better, simply because they'd be predisposed to have a better understanding of the words we speak.

As one insider at a major AI technology company told me: No, his company wasn't actively interested in AIs that generated their own custom languages. But if it were, the greatest advantage he imagined was that it could conceivably allow software, apps, and services to learn to speak to each other without human intervention.

Right now, companies like Apple have to build APIs-basically a software bridge-involving all sorts of standards that other companies need to comply with in order for their products to communicate. However, APIs can take years to develop, and their standards are heavily debated across the industry in decade-long arguments. But software, allowed to freely learn how to communicate with other software, could generate its own shorthands for us. That means our "smart devices" could learn to interoperate, no API required.

Given that our connected age has been a bit of a disappointment, given that the internet of things is mostly a joke, given that it's no easier to get a document from your Android phone onto your LG TV than it was 10 years ago, maybe there is something to the idea of letting the AIs of our world just talk it out on our behalf. Because our corporations can't seem to decide on anything. But these adversarial networks? They get things done.

If the self is socially constructed as many sociologists think it is, a product of language even, is real AI far away? Evolutionary algorithms are some scary ass shit sometimes.

Politics and Current Events / The Kush Thread
I heard his statement on the radio today and it's time for his own thread. It'll fill. Don't worry.
I'm totally conflicted here. If he wins on appeal because it turns out the CIA was just pulling a Venezuela trick on behalf of the global corporations it works for then he becomes a cult of personality and moves toward authoritarianism to deal with the CIA factions which sucks. If it turns out that he is actually guilty or if nothing comes of his appeal, then Brazil goes back to the bad old days. Anyway, here's the story:
Ex-Brazil President Lula sentenced to nearly 10 years for corruption

Brad Brooks

BRASILIA (Reuters) - Former Brazilian leader Luiz Inacio Lula da Silva, a top contender to win next year's presidential election, was convicted on corruption charges on Wednesday and sentenced to nearly 10 years in prison.

The ruling marked a stunning fall for Lula, who will remain free on appeal, and a serious blow to his chances of a political comeback.

Lula was Brazil's first working-class president and remains a popular figure among voters after he left office six years ago with an 83-percent approval rating. The former union leader won global admiration for transformative social policies that helped reduce stinging inequality in Latin America's biggest country.

The verdict represented the highest-profile conviction yet in the sweeping corruption investigation that for over three years has rattled Brazil, revealing a sprawling system of graft at top levels of business and government and throwing the country's political system into disarray.

Judge Sergio Moro found Lula guilty of accepting 3.7 million reais ($1.2 million) worth of bribes from engineering firm OAS SA, the amount prosecutors said the company spent refurbishing a beach apartment for Lula in return for his help winning contracts with state oil company Petroleo Brasileiro (PETR4.SA).

Federal prosecutors have accused Lula, who first took the presidency in 2003, of masterminding a long-running corruption scheme that was uncovered in a probe into kickbacks around Petrobras.

Lula's legal team has previously said they would appeal any guilty ruling. They have continuously blasted the trial as a partisan witchhunt, accusing Moro of being biased and out to get Lula for political reasons.

Moro has denied the accusations.

Lula's lawyers did not immediately respond to requests for comment.

Senator Gleisi Hoffmann, the head of the Workers Party, lashed out at the ruling, saying Lula was convicted to prevent him from running for the presidency next year. She said the party would protest the decision and was confident the ruling would be overturned on appeal.

The Brazilian real BRBY extended gains following Moro's decision and reached its strongest in two months. The benchmark Bovespa stock index .BVSP rose to a session high. Investors fear that another Lula presidency would mean a return to more state-directed and less business friendly economic policies.

"Power Vacuum on Left"
Lula would be barred from office if his guilty verdict is upheld by an appeals court, which is expected to take at least eight months to rule.

If he cannot run, political analysts say Brazil's left would be thrown into disarray, forced to rebuild and somehow find a leader who can emerge from the immense shadow that Lula has cast on Brazilian politics for three decades.

"Lula's absence opens a gaping hole in the political scene, it creates an enormous power vacuum on the left," said Claudio Couto, a political scientist at the Getulio Vargas Foundation, a top university. "We have now entered a situation of extreme political tension, even beyond the chaos we have been living for the last year."

Couto said he expected Lula's guilty verdict to be upheld by the appeals court. That would leave the 2018 presidential race wide open and raise chances of a victory by a political outsider, given most known contenders are also ensnared in Brazil's corruption investigations.
Boom to Bust

Lula's two terms were marked by a commodity boom that momentarily made Brazil one of the world's fastest-growing economies. His ambitious foreign policies, aligning Brazil with other big developing nations, raised the country's profile on the global stage.

With Lula's swagger setting the tone, Brazil sought to shrug off northern economic and political hegemony and engage in global problems, like Middle East peace and the standoff over Iran's nuclear program.

Former U.S. President Barack Obama once labeled him the most popular politician on earth.

But after Lula left office, having secured the election of his hand-picked successor Dilma Rousseff, Brazil's economy soured; the nation is just now beginning to emerge from its worst recession on record.

Rousseff was impeached last year for breaking budgetary rules. She and her backers say her ouster was actually a 'coup' orchestrated by her vice president and now President Michel Temer, who himself faces corruption charges.

During his trial, Lula gave five hours of fiery and defiant defense, proclaiming his innocence and saying that it was politics and not the pilfering of public funds that put him on trial.

"But what is happening is not getting me down, just motivating me to go out and talk more," Lula said in his testimony. "I will keep fighting."

($1 = 3.22 reais)
Marijuana Shortage Prompts Emergency In Nevada; Tax Officials Weigh Changes

Sales of recreational marijuana have blown past expectations in Nevada, threatening to leave some dispensaries with empty shelves. After Gov. Brian Sandoval endorsed a statement of emergency in the first week of legal sales, regulators are looking to bolster the supply chain.

The Nevada Tax Commission is meeting Thursday to determine whether the state has enough wholesale marijuana distributors; it could also adopt emergency regulations.

"Right now, only companies that are also licensed to distribute liquor in Nevada are able to bring marijuana to dispensaries," Nevada Public Radio's Casey Morell reports for NPR's Newscast unit. "The dispensaries say that's why they're running out of the drug."

Nevada opened the retail pot market on July 1. The state has 47 licensed stores, and in the first weekend of sales, "well over 40,000 retail transactions" were carried out, tax officials say. Some retailers said they racked up twice as many sales as they had estimated -- and they also reported a dire need for new deliveries to restock their shelves.

At least seven wholesale liquor dealers have applied to become marijuana distributors -- but the tax department has said that as of July 5, "no wholesale liquor dealer has met the application requirements to receive a marijuana distributor license."

The situation has left some stores "running on fumes," Nevada Dispensary Association President Andrew Jolley told the Associated Press on Tuesday.

Nevada tax officials expect pot sales to generate $100 million in revenue over the next two years. In the agency's statement of emergency, the Taxation Department's executive director, Deonne Contine, said changes are needed both to prevent marijuana customers from reverting to the black market and to support the new businesses that have sprung up around legal recreational sales, opening shops and hiring workers.

"Unless the issue with distributor licensing is resolved quickly, the inability to deliver product to the retail stores will result in many of these employees losing their jobs and will cause this nascent industry to grind to a halt," Contine wrote, in a statement that was endorsed by Sandoval.