Energy consumption by indoor cannabis farms will soon rival that of data centers.
What’s the carbon cost of legal marijuana?
It turns out that every little joint and edible adds up. A new report finds that marijuana cultivation accounts for as much as 1 percent of energy use in states such as Colorado and Washington. The electricity needed to illuminate, dehumidify, and air-condition large growing operations may soon rival the energy demands of big data centers, which themselves emit an estimated 100 million metric tons of carbon into the atmosphere every year.
The marijuana industry’s energy use “is immense,” said the report’s author, Kelly Crandall, an analyst for EQ Research, a clean energy policy research institute. Her report found that a large grow operation can draw 2,000 watts per square meter because of its constant need for lighting and ventilation.
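To give a rough sense of scale, here is a back-of-the-envelope sketch of what that power density implies. The room size, duty cycle, and electricity rate below are illustrative assumptions, not figures from Crandall's report:

```python
# Back-of-the-envelope cost of the power density cited in the report.
POWER_DENSITY_W_PER_M2 = 2_000   # figure from the report
room_area_m2 = 100               # hypothetical grow-room size
hours_per_day = 18               # assumed lighting/ventilation duty cycle
rate_usd_per_kwh = 0.12          # assumed electricity rate

load_kw = POWER_DENSITY_W_PER_M2 * room_area_m2 / 1_000
daily_kwh = load_kw * hours_per_day
monthly_cost = daily_kwh * 30 * rate_usd_per_kwh

print(f"Load: {load_kw:.0f} kW; monthly electricity: ${monthly_cost:,.0f}")
```

Even this modest hypothetical room lands in the five-figure monthly range, consistent with the indoor bills reported later in the article.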
The carbon cost of cannabis is likely to grow. In November nine states will vote on marijuana legalization, including California, which could become the biggest player in the legal marijuana industry.
Crandall, who started studying the issue a few years ago while working as the energy strategy coordinator for the city of Boulder, Colorado, said she was surprised “by the magnitude of the industry and its utility bills.” She said she was also struck by how hard it was for the industry to switch to energy-efficient options. “I find it kind of a conundrum that it’s a very cash-rich industry, but because of banking restrictions it also has a difficulty investing in solar and efficiency.” Because marijuana cultivation is still a criminal offense under federal law, most banks will not do business with the industry even in states where it is legal.
Stephen Jensen, president of Green Barn Farms in Addy, Washington, acknowledges that financing “is a big problem.” He added that the marijuana industry is in many ways still learning to do business in a new legal framework, which has slowed its adoption of energy-efficient technologies. “Most of the growers that have converted to this new legal world have come from the indoor space,” he said, which means they are transitioning from working under the radar to operating legally. “That’s what they know and what the industry knows.”
Jensen said his pot cooperative has spent the past two years learning more about growing outdoors, which has allowed it to achieve dramatically lower electricity costs than many other growers. Now it just uses one building to host mother plants and cloning, while the rest is grown in “sun-powered” greenhouses. He said the energy costs average between $1,250 and $1,500 a month, compared with $25,000 to $40,000 for equivalent indoor growing operations in Washington.
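Taking the article's numbers at face value, the gap compounds quickly over a year. A simple sketch (the pairing of low and high figures into conservative and optimistic cases is our own):

```python
# Monthly energy costs quoted in the article: greenhouse vs. indoor grows.
greenhouse = (1_250, 1_500)   # $/month, Green Barn Farms' reported range
indoor = (25_000, 40_000)     # $/month, equivalent indoor operations in WA

low_savings = (indoor[0] - greenhouse[1]) * 12    # conservative pairing
high_savings = (indoor[1] - greenhouse[0]) * 12   # optimistic pairing
print(f"Annual savings: ${low_savings:,} to ${high_savings:,}")
```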
Other parts of the industry are also adapting. A program called Certified Kind, based in Eugene, Oregon, offers growers an alternative to the “organic” label, which they are not permitted to use by federal law. The certification not only requires growers to forgo the use of pesticides but also has strict guidelines for energy use and requires growers to conduct energy audits.
In January, Humboldt County, home to a multibillion-dollar marijuana industry, became the first county in California to regulate cannabis cultivation. The board of supervisors gave growers until the end of the year to register and obtain permits that govern their use of water, energy, and rodenticides.
Crandall said her report pulled from utility filings, interviews, and other published information, although primary information and current research into grow centers’ energy costs was hard to find. “It’s pretty difficult to get people to comment on the record about this sort of thing,” she said.
She also found a dearth of other published research about the industry’s energy consumption. The only real prior study appears to have been published in 2012, before states such as Colorado, Oregon, and Washington voted to legalize cannabis. That study estimated that the industry was then spending $6 billion per year on energy.
Andrew Black, certification director for Certified Kind, said the legal cannabis industry is evolving quickly, which will allow it to become more energy efficient. “Data that was never collected and analyzed due to cannabis prohibition is starting to come to light,” he said. “As normal business practices take root in the cannabis community, and especially as the sale price of cannabis drops, I think you will see a concerted effort toward finding the most economically and energy-efficient way to grow the crop.” Both Jensen and Black said they see the future of the industry in outdoor cultivation, not in inefficient warehouse grows.
Meanwhile, local utilities are just starting to look into ways to work with growers. Earlier in the year Washington’s Puget Sound Energy gave a grower called Trail Blazin’ Productions a $152,000 rebate after the organization invested in LED lighting, which uses less energy and produces less heat.
Crandall said the point of her report was not to focus solely on the magnitude of the marijuana industry’s energy use but to point out ways it could begin to collaborate with utilities and other organizations to reduce energy waste. “A lot of my recommendations involve collaborations among types of entities that may not necessarily have worked together in this way,” she said. “I think utilities don’t quite know how to reach out to an industry like this yet, and the industry doesn’t 100 percent know what their options are.”
She said she hopes her report brings the issue of cannabis’ energy expenditures into the light: “I don’t think anyone wants to discourage energy use, but they do want to discourage wasteful energy use and help people get more opportunities for clean energy.”
Article Disclaimer: This article was originally published on TakePart. Reprinted with permission. John R. Platt covers the environment, technology, philanthropy and more for Scientific American, Conservation, Lion and other publications.
Before Ahmad Khan Rahami planted bombs in New York and New Jersey, he bought bomb-making materials on eBay, linked to jihad-related videos from his public social-media account and was looked into by law enforcement agents, according to the Federal Bureau of Investigation.
If only the authorities had connected the dots.
That challenge — mining billions of bits of information and crunching the data to find crucial clues — is behind a push by U.S. intelligence and law enforcement agencies to harness “big data” to predict crimes, terrorist acts and social upheaval before they happen. The market for such “predictive analytics” technology is estimated to reach $9.2 billion by 2020, up from $3 billion in 2015, according to research firm MarketsandMarkets.
It’s the stuff of a science-fiction movie like “Minority Report,” in which Tom Cruise played a Washington cop who used technology to arrest people before they carried out crimes. It’s also a red flag for privacy advocates already fighting U.S. spy programs exposed by Edward Snowden and the FBI’s demands that Apple Inc. help it hack into encrypted mobile phones.
The idea is to make sense of the vast and disparate streams of data from sources including social media, GPS devices, video feeds from street cameras and license-plate readers, travel and credit-card records and the news media, as well as government and proprietary systems.
“Data is going to be the fundamental fuel for national security in this century,” William Roper, director of the Defense Department’s strategic capabilities office, said at a conference in Washington last month.
For the first time, the White House on Wednesday released a strategic plan to advance research and development of artificial intelligence technology, including tools to predict incidents that may be dangerous to public safety.
Weeks before Rahami allegedly carried out the attacks in September, he bought circuit boards, electric igniters and ball bearings — all of which are known bomb-making materials, according to charging documents from the FBI.
In previous years, he was flagged by U.S. Customs and Border Protection and the FBI after he made trips to Pakistan and after his father told police he was a terrorist, before recanting the remark.
Law enforcement agents could have been tipped off that Rahami was moving toward an attack had all of those data points been culled together in one place, said Mark Testoni, chief executive officer and president of SAP National Security Services Inc., a U.S.-based subsidiary of German software company SAP SE.
“This is a big data world now,” said Testoni. He said his company has developed a computer platform for doing predictive analytics that is being used in a limited way by a Defense Department agency and by a national security agency. He declined to name the government customers or specify what they are doing.
The technology to predict events is only in its infancy, Testoni said. National security and law enforcement agencies also have different rules when it comes to obtaining and using data, meaning there are walls between what can be accessed and shared, he said. U.S. law enforcement agencies, for example, need a court warrant to access most data.
Privacy advocates express concern about the “Big Brother” implications of such massive data-gathering, calling for more information and public debate about how predictive technology will be used.
“There’s often very little transparency into what’s being brought into the systems or how it’s being crunched and used,” said Rachel Levinson-Waldman, senior counsel to the National Security Program at the Brennan Center for Justice at New York University School of Law. “That also makes it very hard to go back and challenge information that might be incorrect.”
Computer algorithms also fail to understand the context of data, such as whether someone commenting on social media is joking or serious, Levinson-Waldman said.
Testoni’s company and others such as Intel Corp. and PredPol Inc. are among a handful of firms pioneering the use of predictive analytics and artificial intelligence for clients from local police departments to U.S. national security agencies.
More than 60 local police departments in the U.S. have started making use of a service sold by PredPol, which calls itself “The Predictive Policing Company,” to forecast where crimes might occur based on past patterns, said co-founder Jeff Brantingham.
What, Where, When
Its system, developed in collaboration with the Los Angeles Police Department, uses only three types of data: what type of crime occurred, when and where, Brantingham said.
Then, a software algorithm generates the probability of crime occurring in different locations, presented as 500-foot-by-500-foot squares on a computer display or a printed map. With that insight, police departments then can make decisions about how best to apply their resources, such as sending cops to a high-risk area, or which security cameras to monitor, Brantingham said.
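PredPol's actual model is proprietary, so as a toy illustration of the grid idea Brantingham describes, a naive version might simply count historical incidents per 500-foot cell and rank the cells. The coordinates and incidents below are invented:

```python
from collections import Counter

CELL_FT = 500  # grid resolution described in the article

def cell(x_ft, y_ft):
    """Map a coordinate (in feet) to its 500 ft x 500 ft grid cell."""
    return (x_ft // CELL_FT, y_ft // CELL_FT)

def hotspots(incidents, top_n=3):
    """Rank grid cells by historical incident count.
    incidents: iterable of (crime_type, timestamp, x_ft, y_ft)."""
    counts = Counter(cell(x, y) for _, _, x, y in incidents)
    return counts.most_common(top_n)

history = [
    ("burglary", "2016-09-01T22:00", 120, 480),
    ("burglary", "2016-09-03T23:30", 410, 90),
    ("theft",    "2016-09-04T14:00", 1600, 700),
    ("burglary", "2016-09-06T21:15", 260, 330),
]
print(hotspots(history))
```

A real system weights recent incidents more heavily and models how crime clusters spread; this sketch only shows how the what/when/where inputs map onto ranked grid squares.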
PredPol’s system doesn’t make predictions about who will commit a crime, so it stops short of a system that might identify a terrorist in the making.
“Interdicting places is, by and large, an approach that is more in line with protecting civil liberties than interdicting people,” Brantingham said.
Even with such limits, privacy and civil liberties groups oppose the use of predicting policing technology as a threat to the Constitution’s promises of equal protection and due process.
“This is fortune-teller policing that uses deeply flawed and biased data and relies on vendors that shroud their products in secrecy,” said Wade Henderson, president and chief executive officer of the Leadership Conference on Civil and Human Rights. “Instead of using predictive technology to correct dysfunctional law enforcement, departments are using these tools to supercharge discrimination and exacerbate the worst problems in our criminal justice system.”
Vast databases that companies have created for online commerce and communications could help law enforcement and national security agencies build predictive systems if they are allowed to tap into them. Technology companies have terms of service that set out how much personal information can be kept and sold to outside companies such as advertisers, and most resist handing over such data to the government unless a court orders them to do so.
Predictive analytics are already being used by companies like eBay Inc., Amazon.com Inc., and Netflix Inc. to crunch their users’ Internet activity to forecast what they might be interested in. Companies like Facebook Inc. and Twitter Inc. have access to over a billion social-media accounts. The storehouse of data on Americans will only grow with digital feeds from Internet-connected appliances and wearable devices.
Social media, in particular, is a valuable tool in tracking potential terrorist attacks, said Eric Feinberg, founding member of the Global Intellectual Property Enforcement Center, a private company. His firm has patented technology that can scan for hashtags across different social media platforms and in different languages for communications that indicate terrorist planning.
“Our software is about pattern analysis,” Feinberg said. “We focus on the communications stream.”
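Feinberg's patented technology is not public. A minimal sketch of cross-platform hashtag matching might look like the following, with an invented watchlist standing in for whatever terms such a system actually tracks:

```python
import re

# Hypothetical watchlist; the real terms and matching logic are not public.
WATCHLIST = {"#attackplan", "#targetlist"}

HASHTAG_RE = re.compile(r"#\w+")

def flag_posts(posts):
    """Return posts whose hashtags intersect the watchlist,
    regardless of which platform they came from.
    posts: iterable of (platform, text)."""
    flagged = []
    for platform, text in posts:
        tags = {t.lower() for t in HASHTAG_RE.findall(text)}
        if tags & WATCHLIST:
            flagged.append((platform, text))
    return flagged

stream = [
    ("twitter", "Nice day out #weekend"),
    ("facebook", "Meet at dawn #AttackPlan"),
]
print(flag_posts(stream))
```

Lowercasing the extracted tags is what makes the match platform- and case-insensitive; handling multiple languages would mostly be a matter of the watchlist contents, since `\w` already matches non-ASCII word characters in Python 3.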
‘Open Source Indicators’
The U.S. government is working on initial efforts to gain insight into global social and political trends.
A program under the intelligence community’s research arm called Mercury seeks to develop methods for continuous and automated analysis of intercepted electronic communications “in order to anticipate and/or detect political crises, disease outbreaks, terrorist activity and military actions,” said Charles Carithers, spokesman for the Intelligence Advanced Research Projects Activity.
The agency also previously funded the Open Source Indicators program, which “developed methods for continuous, automated analysis of publicly available data in order to anticipate and/or detect significant societal events,” such as mass violence and riots, mass migrations, disease outbreaks and economic instability, Carithers said.
The CIA draws a distinction between using technology to anticipate events and using it to predict them. The agency is using sophisticated algorithms and advanced analytics, along with publicly available data, to forecast events. The initial coverage focuses on the Middle East and Latin America.
“We have, in some instances, been able to improve our forecast to the point of being able to anticipate the development of social unrest and societal instability to within three to five days out,” said Andrew Hallman, the agency’s deputy director for digital innovation.
In its annual report in June, the Defense Science Board said, “Imagine if national leaders had sufficient time to act in emerging regional hot spots to safeguard U.S. interests using interpretation of massive data including social media and rapidly generate strategic options.”
“Such a capability may soon be achievable,” the board said. “Massive data sets are increasingly abundant and could contain predictive clues — especially social media and open-source intelligence.”
If U.S. intelligence agencies develop an advanced system to predict terrorist acts they might call it “Total Information Awareness.” Except that name has already been used, with unhappy results.
Retired Admiral John Poindexter created the “Total Information Awareness” program for the Pentagon’s Defense Advanced Research Projects Agency in 2002 to find and monitor terrorists and other national security threats using data and technology.
The program became so controversial, especially over concerns that privacy rights would be violated, that Congress canceled funding for Poindexter’s office in 2003.
Having been there and done that, Poindexter now says predicting terrorism is possible but would require a lot of data, such as banking information, analysis of social media, travel records and classified material.
The system also has to include strong privacy protections that the public can review, said Poindexter, who said he was working on such a “privacy protection application” when his program was canceled.
“You have to develop public trust in the way this is going to work,” said Poindexter, who continued developing the technology after leaving government through Saffron Technology Inc., a cognitive computing company that Intel bought in 2015 for an undisclosed price. Intel declined to comment.
“The government’s priorities should be to solve the privacy issue and start ingesting massive amounts of data into memory bases,” Poindexter said. “You have to get the public on board with the idea that we can collect and search information on terrorist planning that doesn’t have an adverse impact on innocent people.”
Article Disclaimer: This article was published by Insurance Journal and was retrieved on 10/15/2016 and posted here at INDESEEM for information and educational purposes only. The views, ideas, materials and content of the article remains those of the author. Please cite the original article accordingly.
By Steve Sonka and Yu-Tien Cheng, University of Illinois | November 03, 2015, 7:01 am EST
Big Data — the current buzzword of choice. Today it’s very easy to be overwhelmed by the hype promoting Big Data. Farm media, newspapers and general media, and conference speakers all extol the future transforming effects of Big Data, stressing that “Big Data will be essential to our future, whatever it is.” The goal of this article, and the series of five that follow, is to begin to unravel that “whatever it is” factor for agriculture.
We’ll definitely explore “whatever it is” from a managerial, not a computer science, perspective. Potential implications for agriculture will be the primary emphasis of the following set of articles:
1. Big Data: More Than a Lot of Numbers! This article emphasizes the role of analytics in enabling the integration of various data types to generate insights. It stresses that the “Big” part of Big Data is necessary, but it’s the “Data” part of Big Data that’s likely to affect management decisions.
2. Precision Ag: Not the Same as Big Data But… Today, it’s easy to confuse the two concepts, Precision Ag and Big Data. In addition to briefly reviewing the impact of Precision Ag, this article stresses that Big Data is much more than Precision Ag. However, Precision Ag operations often will generate key elements of the data needed for Big Data applications.
3. Big Data in Farming: Why the “Why” Matters! Big Data applications generally create predictions based on analysis of what has occurred. Uncertainty in farming, rooted in biology and weather, means that the science of agriculture (the Why) will need to be integrated within many of the sector’s Big Data applications.
4. Big Data: Alive and Growing in the Food Sector! Big Data already is being extensively employed at the genetics and consumer ends of the food and ag supply chain. This article will stress the potential for capabilities and knowledge generated at these levels to open new opportunities within production agriculture.
5. A Big Data Revolution: What Would Drive It? Management within farming historically has been constrained by the fundamental reality that the cost of real-time measurement of farming operations exceeded the benefits of doing so. Sensing capabilities (from satellites, to drones, to small-scale weather monitors, to soil moisture and drainage metering) now being implemented will materially lessen that constraint. Doing so will create data streams (or is it floods?) by which Big Data applications can profoundly alter management on the farm.
6. A Big Data Revolution: Who Would Drive It? Over the last 30 years, novel applications of information technology have caused strategic change in many sectors of the economy. This article draws on those experiences to inform our thinking about the potential role of Big Data as a force for change in agriculture.
Big Data: More Than a Lot of Numbers!
Innovation has been critical to increased agricultural productivity and to support of an ever increasing global population. To be effective, however, each innovation had to be understood, adopted, and adapted by farmers and other managers.
Although Big Data is relatively new, it is the focus of intense media speculation today. However, it is important to remember that Big Data won’t have much impact unless it too is understood, adopted and adapted by farmers and other managers. This article provides several perspectives to support that process.
Big Data Defined
“90% of the data in the world today has been created in the last two years alone” (IBM, 2012).
In recent years, statements similar to IBM’s observation and associated predictions of a Big Data revolution have become increasingly more common. Some days it seems like we can’t escape them!
Actually, Big Data and its hype are relatively new. As shown in Figure 1, use of the term Big Data was barely noticeable prior to 2011. However, the term’s usage exploded in 2012 and 2013, expanding by a factor of 5 in just two years.
With all new concepts, it’s nice to have a definition. Big Data has had more than its fair share. Two that we find helpful are:
•The phrase “big data” refers to large, diverse, complex, longitudinal, and/or distributed data sets generated from instruments, sensors, Internet transactions, email, video, click streams, and/or all other digital sources available today and in the future (National Science Foundation, 2012).
•Big Data is high-volume, -velocity, and -variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making (Gartner IT Glossary, 2012).
These definitions are impressive. However, they really don’t tell us how Big Data will empower decision makers to create new economic and social value.
From Technology to Value
In the next few paragraphs, we’ll move beyond those definitions to explore how application of Big Data fosters economic growth. In this article, we’ll present non-ag examples because today there is more experience outside of agriculture. The following articles in this series will focus on agriculture.
Big Data generally is referred to as a singular thing. It’s not! In reality, Big Data is a capability. It is the capability to extract information and craft insights where previously it was not possible to do so.
Advances across several technologies are fueling the growing Big Data capability. These include, but are not limited to, computation, data storage, communications, and sensing.
These individual technologies are “cool” and exciting. However, sometimes a focus on cool technologies can distract us from what is managerially important.
A commonly used lens when examining Big Data is to focus on its dimensions. Three dimensions (Figure 2) often are employed to describe Big Data: Volume, Velocity, and Variety. These three dimensions focus on the nature of data. However, just having data isn’t sufficient.
Analytics is the hidden, “secret sauce” of Big Data. Analytics refers to the increasingly sophisticated means by which analysts can create useful insights from available data.
Now let’s consider each dimension individually:
Interestingly, the Volume dimension of Big Data is not specifically defined. No single standard value specifies how big a dataset needs to be for it to be considered “Big”.
It’s not like Starbucks, where the Tall cup is 12 ounces and the Grande is 16. Rather, Big Data refers to datasets whose size exceeds the ability of typical software to capture, store, manage, and analyze them.
This perspective is intentionally subjective, and what is “Big” varies between industries and applications. An example of one firm’s use of Big Data is provided by GE, which now collects 50 million pieces of data from 10 million sensors every day (Hardy, 2014).
GE installs sensors on turbines to collect information on the “health” of the blades. Typically, one gas turbine can generate 500 gigabytes of data daily. If use of that data can improve energy efficiency by 1%, GE can help customers save a total of $300 billion (Marr, 2014)! The numbers and their economic impact do get “Big” very quickly.
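A quick sketch of the arithmetic behind those figures. The 100-turbine fleet is a hypothetical, chosen only to show how fast the volume accumulates:

```python
# Figures cited in the article for GE's sensor data.
readings_per_day = 50_000_000      # data points collected daily
sensors = 10_000_000               # across this many sensors
per_sensor = readings_per_day / sensors   # readings per sensor per day

gb_per_turbine_per_day = 500       # one gas turbine's daily output
fleet = 100                        # hypothetical fleet size

annual_tb = gb_per_turbine_per_day * fleet * 365 / 1_000
print(f"{per_sensor:.0f} readings/sensor/day; "
      f"{annual_tb:,.0f} TB/year for a {fleet}-turbine fleet")
```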
The Velocity dimension refers to the capability to acquire, understand, and respond to events as they occur. Sometimes it’s not enough just to know what’s happened; rather we want to know what’s happening. We’ve all become familiar with real-time traffic information available at our fingertips.
Google Maps provides live traffic information by analyzing the speed of phones running the Google Maps app on the road (Barth, 2009). Based on changing traffic conditions and extensive analysis of the factors that affect congestion, Google Maps can suggest alternative routes in real time to ensure a faster and smoother drive.
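Google's real pipeline is far more sophisticated and not public. As a bare-bones sketch of the idea, one could average reported phone speeds per road segment and prefer the least congested route; the segment names and speeds below are invented:

```python
from collections import defaultdict

def segment_speeds(pings):
    """Average the speeds reported by phones on each road segment.
    pings: iterable of (segment_id, speed_mph)."""
    totals = defaultdict(lambda: [0.0, 0])
    for seg, speed in pings:
        totals[seg][0] += speed
        totals[seg][1] += 1
    return {seg: s / n for seg, (s, n) in totals.items()}

def pick_route(routes, speeds, free_flow=60.0):
    """Choose the route whose segments are least congested,
    scoring each route by average observed speed (no data = free flow)."""
    def score(route):
        return sum(speeds.get(seg, free_flow) for seg in route) / len(route)
    return max(routes, key=score)

pings = [("I-5_n12", 18), ("I-5_n12", 22), ("SR-99_n4", 54)]
speeds = segment_speeds(pings)
print(pick_route([["I-5_n12"], ["SR-99_n4"]], speeds))
```

This is the Velocity dimension in miniature: the value comes from the readings being current, since yesterday's average speed says little about right now.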
Variety, as a Big Data dimension, may be the most novel and intriguing. For many of us, our image of data is a spreadsheet filled with numbers meaningfully arranged in rows and columns.
With Big Data, the reality of “what is data” has wildly expanded. The lower row of Figure 3 shows some newer kinds of sensors in the world, from cell phones to smart watches to smart lights.
Cell phones and watches can now monitor users’ health. Even light bulbs can be used to observe movement, which helps some retailers detect consumer behavior in stores and personalize promotions (Reed, 2015). We even include human eyes in Figure 3, as it would be possible to track your eyes as you read this article.
The power of integrating across diverse types and sources of data is commercially substantial. For example, UPS vehicles are fitted with sensors that track engine performance, speed, braking, direction, and more (van Rijmenam, 2014).
By analyzing these and other data, UPS is able not only to monitor engine health and driving behavior but also to suggest better routes, leading to substantial fuel savings (Schlangenstein, 2013).
So, Volume, Variety, and Velocity can give us access to lots of data, generated from diverse sources with minimal lag times. At first glance that sounds attractive. Fairly quickly, however, managers start to wonder, what do I do with all this stuff?
Just acquiring more data isn’t very exciting and won’t improve agriculture. Instead, we need tools that can enable managers to improve decision-making; this is the domain of Analytics.
One tool providing such capabilities was recently unveiled by the giant retailer, Amazon (Bensinger, 2014). This patented tool will enable Amazon managers to undertake what it calls “anticipatory shipping”, a method to start delivering packages even before customers click “buy”.
Amazon intends to box and ship products it expects customers in a specific area will want but haven’t yet ordered. In deciding what to ship, Amazon’s analytical process considers previous orders, product searches, wish lists, shopping-cart contents, returns, and even how long an Internet user’s cursor hovers over an item.
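Amazon's model is not public. Purely as an illustration of how such signals might be combined, here is a toy scoring function; the weights, threshold idea, and normalized inputs are all invented for this sketch:

```python
# Invented weights over the signal types the article lists.
WEIGHTS = {
    "prior_orders": 0.4,
    "searches": 0.2,
    "wish_listed": 0.2,
    "in_cart": 0.15,
    "hover_seconds": 0.05,
}

def ship_ahead_score(signals):
    """Combine per-customer signals (each normalized to 0..1)
    into a single pre-shipment score."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

customer = {"prior_orders": 1.0, "searches": 0.5, "in_cart": 1.0}
score = ship_ahead_score(customer)
print(f"score = {score:.2f}")  # ship to a regional hub above some threshold
```

A real system would learn such weights from historical order data rather than fixing them by hand; the point here is only that heterogeneous behavioral signals get fused into one decision number.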
Analytics and its related, more recent term, data science, are key factors by which Big Data capabilities actually can contribute to improved performance, not just in retailing, but also in agriculture. Such tools are currently being developed for the sector, although these efforts typically are at early stages.
In this discussion, we explored the dimensions of Big Data — 3Vs and an A. The Volume dimension links directly to the “Big” component of Big Data. Variety, Velocity and Analytics relate to the “Data” aspect. While Volume is important, strategic change and managerial challenges will be driven by Variety, Velocity, and especially Analytics.
Unfortunately, media and advertising tend to emphasize Volume; it’s easy to impress with really, really large numbers. But farmers and agricultural managers shouldn’t be distracted by statistics on Volume.
Big Data’s potential doesn’t rest on having lots of numbers or even having the world’s largest spreadsheet. Instead, the ability to integrate across numerous and novel data sources is key.
The point of doing this is to create new managerial insights that enable better decisions. While Volume and Variety are necessary, it is Analytics that fuses the data sources and allows new knowledge to be created.
Emphasizing the critical role of Variety of data sources and Analytics capabilities is particularly important for production agriculture. Individual farms and other agricultural firms aren’t likely to possess the entire range of data sources needed to optimize value creation.
Further, sophisticated and specialized Analytics competencies will be required. To be effective, however, the computer science competencies also need to be combined with knowledge of the business and science aspects of agricultural production.
At times this sounds complicated and maybe threatening. A farmer from Ohio, in a recent conversation on the topic, made a comment that helps unravel this complexity. He noted that for him, as a Midwestern farmer, effective use of Big Data is mainly about relationships.
The relevant question is, “Which input and information suppliers and customers can provide the Big Data capabilities for him to optimize his decisions?” And he noted, “For farmers, managing those relationships isn’t new!”
Article Disclaimer: This article was published by Agprofessional.com and was retrieved and posted at INDESEEM for information and educational purposes only. The views, opinions, thoughts, and information expressed in this article are those of the authors. Please cite the original and INDESEEM accordingly.
Jim Melvin, Public Service Activities October 29, 2015
CLEMSON — While researchers at Clemson University have recently announced an array of breakthroughs in agricultural and life sciences, the data sets they are now using to facilitate these achievements are a mountain compared to the molehill that was available just a few years ago.
But as the amount of “Big Data” being generated and shared throughout the scientific community continues to grow exponentially, new issues have arisen. Where should all this data be stored and shared in a cost-effective manner? How can it be most efficiently transferred across advanced data networks? How will researchers be interacting with the data and global computing infrastructure?
A team of trail-blazing scientists and information technologists at Clemson is working hard to answer these questions by studying ways to simplify collaboration and improve efficiency.
“I use genomic data sets to find gene interactions in various crop species,” said Alex Feltus, an associate professor in genetics and biochemistry at Clemson. “My goal is to advance crop development cycles to make crops grow fast enough to meet demand in the face of new economic realities imposed by climate change. In the process of doing this, I’ve also become a Big Data scientist who has to transfer data across networks and process it very quickly using supercomputers like the Palmetto Cluster at Clemson. And I recently found myself — especially in just the past couple of years — bumping up against some pretty serious bottlenecks that have slowed down my ability to do my best possible work.”
Big Data, defined as data sets too large and complex for traditional computers to handle, is being mined in new and innovative ways to computationally analyze patterns, trends and associations within the field of genomics and a wide range of other disciplines. But significant delays in Big Data transfer can cause scientists to give up on a project before they even start.
“There are many available technologies in place today that can solve the Big Data transfer problem,” said Kuang-Ching “KC” Wang, associate professor in electrical and computer engineering and also networking chief technology officer at Clemson. “It’s an exciting time for genomics researchers to vastly transform their workflows by leveraging advanced networking and computing technologies. But to get all these technologies working together in the right way requires complex engineering. And that’s why we are encouraging genomics researchers to collaborate with their local IT resources, which include IT engineers and computer scientists. This kind of cross-discipline collaboration is reflecting the national research trends.”
“Universities and other research organizations can spend a lot of money building supercomputers and really fast networks,” Feltus said. “But with research computing systems, there’s a gulf between the ‘technology people’ and the ‘research people.’ We’re trying to bring these two groups of experts together and learn to speak a common dialect. The goal of our paper is to expose some of this information technology to the research scientists so that they can better see the big picture.”
It won’t be long before the information generated by high-throughput DNA sequencing is measured in exabytes. An exabyte is one quintillion bytes, or one billion gigabytes; a byte is the unit computers use to represent a letter, number or symbol.
In simpler terms, that’s a mountain of information so immense it makes Everest look like a molehill.
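A rough sense of that scale can be had with a few lines of arithmetic. The 3 GB figure for an uncompressed human genome below is an outside approximation for illustration, not a number from the article:

```python
# Decimal storage units: how many gigabytes fit in an exabyte?
BYTES_PER_GB = 10**9   # one billion bytes per gigabyte
EXABYTE = 10**18       # one quintillion bytes

print(EXABYTE // BYTES_PER_GB)   # 1000000000 -- one billion gigabytes

# Rough illustration: an uncompressed human genome is on the order of 3 GB,
# so a single exabyte could hold hundreds of millions of genomes.
GENOME_BYTES = 3 * BYTES_PER_GB
print(EXABYTE // GENOME_BYTES)   # 333333333
```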
“The technology landscape is really changing now,” Wang said. “New technologies are coming up so fast, even IT experts are struggling to keep up. So to make these new and ever-evolving resources available quickly to a wider range of different communities, IT staffs are more and more working directly with domain science researchers as opposed to remaining in the background waiting to be called upon when needed. Meanwhile, scientists are finding that the IT staffs that are the most open-minded and willing to brainstorm are becoming an invaluable part of the research process.”
The National Science Foundation and other high-profile organizations have made Big Data a high priority and they are encouraging scientists to explore the issues surrounding it in depth. In August 2014, Feltus, Wang and five cohorts received a $1.485 million NSF grant to advance research on next-generation data analysis and sharing. Also in August 2014, Feltus and Walt Ligon at Clemson received a $300,000 NSF grant with Louisiana State and Indiana universities to study collaborative research for computational science. And in September 2012, Wang and James Bottum of Clemson received a $991,000 NSF grant to roll out a high-speed, next-generation campus network to advance cyberinfrastructure.
“NSF is increasingly showing support for these kinds of research collaborations for many of the different problem domains,” Wang said. “The sponsoring organizations are saying that we should really combine technology people and domain research people and that’s what we’re doing here at Clemson.”
Feltus, for one, is sold on the concept. He says that working with participants in Wang’s CC-NIE grant has already uncovered a slew of new research opportunities.
“During my career, I’ve been studying a handful of organisms,” Feltus said. “But because I now have much better access to the data, I’m finding ways to study a lot more of them. I see fantastic opportunities opening up before my eyes. When you are able to give scientists tools that they’ve never had before, it will inevitably lead to discoveries that will change the world in ways that were once unimaginable.”
This material is based upon work supported by the National Science Foundation (NSF) under Grant Nos. 1443040, 1447771 and 1245936. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF.
Article Disclaimer: This article was published by Clemson University and was retrieved on 10/29/2015 and posted here at INDESEEM for information and educational purposes only. The views, thoughts, research findings, and information contained in the article remain those of the authors. Please cite the original and this source accordingly.
Rising temperatures and lower rainfall have already affected crop yields in areas of southern Australia, and yields will continue to be affected, the report said.
Greater frequency and intensity of extreme weather events, like bushfires, droughts and cyclones will lead to decreased productivity across the agricultural sector, including the livestock and dairy industries.
The prospect of reduced agricultural production is a big issue for Australia, where the gross value of all agricultural commodities produced was roughly $50 billion for the financial year ending June 30, 2014.
The agriculture, forestry and fishing sector employed 2.8 per cent of all employed Australians in August 2014, and represented 2.4 per cent of real gross value added to Australia’s economy in 2013-14, data from Austrade reveals.
Some agricultural commodities – wheat and frozen, chilled or fresh beef – are in Australia’s top ten exports.
“Between 1982 and 2012 more than half of Australia’s wheat-growing regions have improved their WUE [water use efficiency] by at least 50 per cent,” the GRDC report says.
“Many areas have achieved even more than this.”
Young farmer Joshua Gilbert works on the family cattle stud in Nabiac, NSW.
He is the chair of Young Farmers, a subgroup of the NSW Farmers Association.
Mr Gilbert said farmers had already been dealing with the challenges of climate change without necessarily knowing what to call it.
However, many farmers are recognising that changing conditions on their land are due to climate change, and some are taking steps to protect their farms from the effects, Mr Gilbert said.
“I guess what we’ve seen is there is a lot more knowledge from younger farmers,” Mr Gilbert said.
He said seasonal variability, including the unknowns of rainfall and extreme weather events, has been affecting farmers for years.
The long term changes to the climate would worsen this variability, as farmers could expect more droughts and bushfires in future, the Climate Council’s report said.
SBS contacted the Australian Livestock Exporters Council and the National Farmers Federation, to ask if they were concerned about the effects of climate change on Australia’s agriculture sector. Both were unavailable for comment.
Key findings of the Climate Council’s report:
Climate change is making weather patterns more extreme and unpredictable, with serious consequences for Australia’s agricultural production
Climate change is driving an increase in the intensity and frequency of hot days and heatwaves in Australia, changing rainfall patterns, increasing the severity of droughts, and driving up the likelihood of extreme fire danger weather.
Average rainfall in southern Australia during the cool season is predicted to decline further, and the time spent in extreme drought conditions is projected to increase.
Water scarcity, heat stress and increased climatic variability in our most productive agricultural regions, such as the Murray Darling Basin, are key risks for our food security, economy, and dependent industries and communities.
Climatic challenges could result in imports of key agricultural commodities such as wheat increasingly outweighing exports.
More frequent and intense heatwaves and extreme weather events are already affecting food prices in Australia
Climate change is increasing the variability of crop yields.
Food prices during the 2005–2007 drought increased at twice the rate of the Consumer Price Index (CPI), with fresh fruit and vegetables the worst hit, increasing 43 per cent and 33 per cent respectively.
Reductions of livestock numbers during droughts can directly affect meat prices for many years.
Rainfall deficiencies in parts of Western Australia and central Queensland are projected to reduce total national crop production by 12 per cent in 2014-15, and the value of beef and veal exports by 4 per cent.
Cyclone Larry destroyed 90 per cent of the North Queensland banana crop in 2006, affecting supply for nine months and increasing prices by 500 per cent.
The 2009 heatwave in Victoria decimated fruit crops, with significant production losses of berry and other fruit crops.
Climate change is affecting the quality and seasonal availability of many foods in Australia
Up to 70 per cent of Australia’s wine-growing regions with a Mediterranean climate (including iconic areas like the Barossa Valley and Margaret River) will be less suitable for grape growing by 2050. Higher temperatures will continue to cause earlier ripening and reduced grape quality, as well as encourage expansion to new areas, including some regions of Tasmania.
Many foods produced by plants growing at elevated CO2 have reduced protein and mineral concentrations, reducing their nutritional value.
Harsher climate conditions will increase use of more heat-tolerant breeds in beef production, some of which have lower meat quality and reproductive rates.
Heat stress reduces milk yield by 10-25 per cent and up to 40 per cent in extreme heatwave conditions.
The yields of many important crop species such as wheat, rice and maize are reduced at temperatures above 30°C.
Australia is extremely vulnerable to disruptions in food supply through extreme weather events
There is typically less than 30 days’ supply of non-perishable food and less than five days’ supply of perishable food in the supply chain at any one time. Households generally hold only about a 3-5 day supply of food. Such low reserves are vulnerable to natural disasters and disruption to transport from extreme weather.
During the 2011 Queensland floods, several towns such as Rockhampton were cut off for up to two weeks, preventing food resupply. Brisbane came within a day of running out of bread.
Australia’s international competitiveness in many agricultural markets will be challenged by the warming climate and changing weather patterns
Australia is projected to be one of the most adversely affected regions from future changes in climate in terms of reductions in agricultural production and exports.
Climate impacts on agricultural production in other countries will affect our competitiveness, especially if warmer and wetter conditions elsewhere boost production of key products such as beef and lamb.
If the current rate of climate change is maintained, adaptation to food production challenges will be increasingly difficult and expensive
By 2061, Australia’s domestic demand for food could be 90 per cent above 2000 levels, with a similar increase in export demand.
Transitioning to a new, low-carbon economy is critical to avoiding the most dangerous impacts of climate change.
The longer action on climate change is delayed, the more likely it is that progressive, small-scale adaptive steps to cope with climate change will become increasingly inadequate and larger, more expensive changes will be required.
Article Disclaimer: This article was published online by SBS and was retrieved on 10/18/2015 and posted here at INDESEEM for information and educational purposes only. The views, findings, thoughts, and opinions expressed in the article are those of the author and his source. Please cite the original source accordingly.
On the MoneyWeek cruise last week there was much talk about sustainability – about pollution, about climate change, about energy usage and waste and generally about the ability of the earth to keep giving and giving to a growing human population.
Regular readers will know that, being great believers in human ingenuity, we are generally optimistic on these things. That’s a position increasingly borne out by new technology.
Consider agriculture. The truth is that, while there are obviously iffy moments (North Korea, China’s ‘great leap forward’, etc), agricultural yields always rise over time as new techniques and technologies take farming to new levels. Some of our major crops have seen yields rise ten-fold in the last 200 years – corn yields alone are up four-fold since the 1950s.
We’re reaching one of those new levels right now thanks to ‘precision agriculture’ – a mixture of big data and (coming soon) robotics that helps farmers to customise the cultivation of every square foot of their land.
According to IBM (which is active in the area) by “collecting real-time data on weather, soil and air quality, crop maturity and even equipment and labour costs and availability” and then using predictive analytics, farmers can make smarter decisions, decisions that result in better productivity, less waste, fewer pesticides, less energy and water usage and, in the end, fewer people.
One of the best descriptions of how all this works comes from Jess Lowenberg-DeBoer in Foreign Affairs magazine. The key to getting this right is to use what is known as ‘variable rate technology’ to map every part of a field for things such as phosphates, acidity, potassium and the like, and then to treat each part of any field with the fertilisers that suits it and to see which fields will work best for which crops at which time of year.
Right now, this means putting sensors in the soil manually to check which bit needs what, something that means that, in the US, they are used only every 2.5 acres (in Brazil it is every 12). That’s a start, but it also means that “huge productivity gains” are missed – soil can change every few feet.
It’s also expensive – which is why only 20% of US farms are fully precision-farmed. However, new sensors are in development that can be put into the ground every few feet, take regular readings and report those readings via GPS, something that will lead to a system whereby each plant effectively reports its needs as the tractor approaches. Fertiliser drops can then be automatically adjusted as the vehicle moves down a field.
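The variable-rate logic described above can be sketched in a few lines. Everything here is hypothetical for illustration, the function names, target levels and rates are invented, and real applicator controllers use proprietary interfaces:

```python
# Hypothetical sketch of variable-rate application logic (not a real
# equipment API): each grid cell reports a soil phosphate reading, and
# the applicator rate is scaled toward an assumed target level.

TARGET_PPM = 30.0   # assumed target soil phosphate, parts per million
MAX_RATE = 50.0     # assumed maximum applicator output, kg/ha

def fertiliser_rate(reading_ppm: float) -> float:
    """Return a fertiliser rate proportional to the nutrient deficit."""
    deficit = max(0.0, TARGET_PPM - reading_ppm)
    return min(MAX_RATE, deficit / TARGET_PPM * MAX_RATE)

# As the tractor moves down the field, each sensor reading adjusts the drop:
readings = [28.0, 12.0, 31.5, 22.0]   # made-up per-cell readings
rates = [fertiliser_rate(r) for r in readings]
```

A cell already at or above target gets nothing; a depleted cell gets a rate in proportion to its shortfall, capped at the applicator's maximum.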
There are also sensors in development that can check on the colours of plants to judge their water and nitrogen requirements.
There’s more. GPS data can also be used to ‘auto-guide’ tractors. Manual driving is skilled and expensive, and involves a lot of overlapping (farmers worry more about missing bits of the field than they do about over-fertilising), says Lowenberg-DeBoer. Auto-guidance takes away this problem (nothing is missed and nothing is done twice).
That takes us on to the next bit.
Some 60% of UK farms are thought to use some kind of precision farming techniques (sensor systems, cameras, drones, virtual field maps, GPS-guided tractors etc) but if tractors can be guided via GPS they don’t need drivers at all. And if tractors don’t need drivers (driverless tractors are being tested and introduced in the US), they don’t need to be particularly big.
The future might be about entirely automated agriculture – fields looked after by bots on the ground checking for weeds, pests and fertiliser levels, while drones check the weather above and report all information back to the farmer’s central systems.
There’s a fun piece on this in the Guardian. The key point to note is that this isn’t all futuristic imagining about how we might feed a future world, it is technology we have and we are beginning to use. As one farmer told the Guardian, “in ten years we will look back at today and think that we were dinosaurs in our methods”.
So how do we invest in all this? There are the big players – IBM and Accenture being the obvious names in digital agriculture, and John Deere the obvious one in the equipment area (they’ll be making the driverless tractors). Otherwise a lot of the interesting companies in the area are unlisted (the UK’s Precision Decisions for example).
Finally, we talked at length on the MoneyWeek Cruise about a French firm that has a finger in many of the relevant pies here. I will be telling John all about it in our podcast on Friday. Look out for that!
Article Disclaimer: This article was published online at MoneyWeek and was retrieved on 10/18/2015 and posted at INDESEEM for educational and information purposes only. The views, thoughts and opinions expressed in the article are those of the authors and their sources. Please cite the original source accordingly.
Researchers are taking advantage of Big Data analytics to help fight climate change. Photo: r2hox
Data-driven climate adaptation could revive rice yields in Colombia and beyond.
From forecasting presidential elections to predicting disease outbreaks, analysts are finding ways to turn Big Data — the immense stocks of information collected in computers worldwide — into an invaluable resource for planning and decision-making. Now, scientists at the International Center for Tropical Agriculture (CIAT) have applied Big Data tools to pinpoint strategies that work for small-scale farmers in a changing climate.
“With the availability of modern information technology, we have an opportunity in agriculture to make more informed decisions based on the data,” says project leader Daniel Jimenez. His team has been studying rice in Colombia, where CIAT leads a major research partnership between the Colombian government and the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS).
Colombia enjoys tropical sunshine, diverse landscapes and multiple crops per year, so agriculture should be driving its economic development. The whole region of Latin America was dubbed “the next global breadbasket” in a recent report from the Inter-American Development Bank and the Global Harvest Initiative. Yet Colombia’s rice sector is in trouble. Between 2007 and 2012, the yield from irrigated rice mysteriously dropped from 6 to 5 tons per hectare, erasing the increase achieved over the previous decade.
The cause of the shrinking yields is unknown, but climate change is a leading suspect. Subtle shifts in rainfall as well as more extreme weather are forcing rice growers to toss aside old assumptions about when, where and what to plant.
CIAT’s initiative in Colombia has several research groups studying the impacts and looking for ways to better cope with climate change. The Big Data team believed they could uncover answers in existing data sets — historical measurements of climate, yields and farming practices, gathered and filed away by Colombia’s National Federation of Rice Growers (FEDEARROZ).
Following the success of CCAFS’ Big Data analysis in Colombia, the project will be rolled out in Nicaragua, Peru, Argentina and Uruguay.
Crunching the numbers
First, they needed the data. CIAT brought their idea to FEDEARROZ, which keeps country-wide records. It was a delicate proposition, as open data sharing is still in its infancy in many places and primary data holders often have legitimate concerns about how the information they share will be used.
Once FEDEARROZ understood and trusted CIAT’s intentions, the federation agreed to provide an annual rice survey, harvest monitoring records and results from agronomic experiments. The CIAT analysts used advanced algorithms borrowed from fields like biology, robotics and neuroscience to comb through these data and tease out patterns that lined up with weather records.
The results are highly site-specific. In the town of Saldaña, for example, the analysis showed that rice yields were limited mainly by solar radiation during the grain-ripening stage. Meanwhile, in the town of Espinal, limiting factors differed by rice variety. Looking at the variety most commonly cultivated in Espinal, the team found that it suffered from sensitivity to warm nights. This suggests that farmers in Saldaña can boost yields by lining up their sowing dates with sunnier seasons, whereas those in Espinal may need to choose a variety more suited to the local climate.
For even more predictive power, the scientists tried pairing the historical records with state-of-the-art seasonal forecasts generated by a separate CIAT team in Colombia. They searched for weather patterns in previous years that resembled the forecast, and checked which varieties did best in those years. In this way, researchers can learn from the past to anticipate what is coming. Farmers can be advised months in advance about tried-and-true rice varieties and planting dates, even in the midst of erratic climate patterns. Applying this information could potentially raise yields by 1 to 3 tons per hectare, Jimenez says.
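The analogue-season idea described above, match the forecast to similar past seasons, then see which varieties did best in those seasons, can be sketched in a few lines. The data, distance measure and variety names below are invented for illustration; the actual CIAT analysis draws on far richer records and algorithms:

```python
# Illustrative "analogue season" sketch: find past seasons whose weather
# most resembles the seasonal forecast, then recommend the rice variety
# with the best average yield across those analogue seasons.
# All numbers are made up for illustration.
import math

# (rainfall_mm, mean_temp_C) per past season, with per-variety yields (t/ha)
history = {
    2008: {"weather": (900, 27.0), "yields": {"A": 5.8, "B": 5.1}},
    2009: {"weather": (650, 28.5), "yields": {"A": 4.9, "B": 5.6}},
    2010: {"weather": (700, 28.2), "yields": {"A": 5.0, "B": 5.7}},
    2011: {"weather": (950, 26.8), "yields": {"A": 6.0, "B": 5.2}},
}

def recommend(forecast, k=2):
    # the k past seasons whose weather lies nearest the forecast
    analogues = sorted(
        history, key=lambda y: math.dist(history[y]["weather"], forecast)
    )[:k]
    varieties = history[analogues[0]]["yields"].keys()
    avg = {v: sum(history[y]["yields"][v] for y in analogues) / k
           for v in varieties}
    return max(avg, key=avg.get)

print(recommend((680, 28.3)))  # a dry, warm forecast favours variety "B" here
```

With a wetter, cooler forecast such as `(930, 26.9)` the same routine picks variety "A" instead, which is the sense in which advice can be issued months ahead of planting.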
“Climate change obligates us to manage our food systems in a more dynamic way, and Big Data offers the most effective way to achieve this. Like the hoe and spade, these new tools are becoming crucial implements for global food production,” said Andy Jarvis, director of CIAT’s Policy Analysis Research Area and CCAFS Flagship 1 Leader.
Ready for scale-up
The project won the UN Global Pulse’s Big Data Climate Challenge last year, with contest organizers calling it “uniquely innovative.” Now the team is ready to scale up the techniques from their pilot. Branching out to other countries and crops will help ensure that these methods are adaptable and useful in different contexts.
With support from CCAFS and the World Bank, CIAT researchers will partner with the Fund for Irrigated Rice in Latin America (FLAR) to introduce the approach to rice growers associations in other countries, starting with Nicaragua, Peru, Argentina and Uruguay in 2015 and 2016. FLAR plans to include the big data tools in its agronomy program, which will deliver them to a wide range of actors and institutions — farmers, breeders and agricultural support organizations. The projects in Latin America will serve as case studies, potentially laying the groundwork to bring this approach to even more farmers elsewhere.
The team will also be working to improve the new tools. Further research will incorporate data on soils, pests, diseases, costs and other factors to increase explanatory power. The researchers will test new ways of capturing and analysing data that could further strengthen the approach.
At the same time, they want to reach out to potential users and promote what they call a “revolution in data-driven agronomy”. Jimenez and his co-workers envision scientists routinely using advanced analyses of commercial data in their research, breeders gathering feedback on the real-world performance of their strains, and agricultural support organizations helping farmers make informed decisions and become more resilient to the vagaries of the weather.