Energy consumption by indoor cannabis farms will soon rival that of data centers.
What’s the carbon cost of legal marijuana?
It turns out that every little joint and edible adds up. A new report finds that marijuana cultivation accounts for as much as 1 percent of energy use in states such as Colorado and Washington. The electricity needed to illuminate, dehumidify, and air-condition large growing operations may soon rival the expenditures from big data centers, which themselves emit an estimated 100 million metric tons of carbon into the atmosphere every year.
The marijuana industry’s energy use “is immense,” said the report’s author, Kelly Crandall, an analyst for EQ Research, a clean energy policy research institute. Her report found that a large grow operation can have energy expenditures of 2,000 watts per square meter because of its constant need for lighting and ventilation.
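To put the report's 2,000-watts-per-square-meter figure in perspective, here is a rough back-of-the-envelope cost estimate. The room size, operating schedule, and electricity rate below are my own illustrative assumptions, not figures from the report:

```python
# Rough monthly energy-cost estimate for an indoor grow operation,
# based on the report's ~2,000 W per square meter of canopy.
power_density_w_per_m2 = 2_000   # from the report
canopy_area_m2 = 100             # assumed room size (illustrative)
hours_per_day = 18               # assumed lighting/ventilation schedule
rate_usd_per_kwh = 0.12          # assumed utility rate

load_kw = power_density_w_per_m2 * canopy_area_m2 / 1_000   # 200 kW of load
kwh_per_month = load_kw * hours_per_day * 30                # 108,000 kWh
monthly_cost_usd = kwh_per_month * rate_usd_per_kwh         # ~$12,960
print(f"{kwh_per_month:,.0f} kWh/month -> ${monthly_cost_usd:,.0f}/month")
```

Even under these conservative assumptions, a single mid-sized grow room lands in the five-figure monthly range, consistent with the indoor cost figures quoted later in the article.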
The carbon cost of cannabis is likely to grow. In November nine states will vote on marijuana legalization, including California, which could become the biggest player in the legal marijuana industry.
Crandall, who started studying the issue a few years ago while working as the energy strategy coordinator for the city of Boulder, Colorado, said she was surprised “by the magnitude of the industry and its utility bills.” She said she was also struck by how hard it was for the industry to switch to energy-efficient options. “I find it kind of a conundrum that it’s a very cash-rich industry, but because of banking restrictions it also has difficulty investing in solar and efficiency.” Because marijuana cultivation is still a criminal offense under federal law, most banks will not do business with the industry even in states where it is legal.
Stephen Jensen, president of Green Barn Farms in Addy, Washington, acknowledges that financing “is a big problem.” He added that the marijuana industry is in many ways still learning to do business in a new legal framework, which has slowed its adoption of energy-efficient technologies. “Most of the growers that have converted from this new legal world have come from the indoor space,” he said, which means they are transitioning from working under the radar to operating legally. “That’s what they know and what the industry knows.”
Jensen said his pot cooperative has spent the past two years learning more about growing outdoors, which has allowed it to achieve dramatically lower electricity costs than many other growers. Now it just uses one building to host mother plants and cloning, while the rest is grown in “sun-powered” greenhouses. He said the energy costs average between $1,250 and $1,500 a month, compared with $25,000 to $40,000 for equivalent indoor growing operations in Washington.
Other parts of the industry are also adapting. A program called Certified Kind, based in Eugene, Oregon, offers growers an alternative to the “organic” label, which they are not permitted to use by federal law. The certification not only requires growers to forgo the use of pesticides but also has strict guidelines for energy use and requires growers to conduct energy audits.
In January, Humboldt County, home to a multibillion-dollar marijuana industry, became the first county in California to regulate cannabis cultivation. The board of supervisors gave growers until the end of the year to register and obtain permits that govern their use of water, energy, and rodenticides.
Crandall said her report pulled from utility filings, interviews, and other published information, although primary information and current research into grow centers’ energy costs was hard to find. “It’s pretty difficult to get people to comment on the record about this sort of thing,” she said.
She also found a dearth of other published research about the industry’s energy consumption. The only real prior study appears to have been published in 2012, before states such as Colorado, Oregon, and Washington voted to legalize cannabis. That study estimated that the industry’s energy expenditures at the time were $6 billion per year.
Andrew Black, certification director for Certified Kind, said the legal cannabis industry is evolving quickly, which will allow it to become more energy efficient. “Data that was never collected and analyzed due to cannabis prohibition is starting to come to light,” he said. “As normal business practices take root in the cannabis community, and especially as the sale price of cannabis drops, I think you will see a concerted effort toward finding the most economically and energy-efficient way to grow the crop.” Both Jensen and Black said they see the future of the industry in outdoor cultivation, not in inefficient warehouse grows.
Meanwhile, local utilities are just starting to look into ways to work with growers. Earlier in the year Washington’s Puget Sound Energy gave a grower called Trail Blazin’ Productions a $152,000 rebate after the organization invested in LED lighting, which uses less energy and produces less heat.
Crandall said the point of her report was not to focus solely on the magnitude of the marijuana industry’s energy use but to point out ways it could begin to collaborate with utilities and other organizations to reduce energy waste. “A lot of my recommendations involve collaborations among types of entities that may not necessarily have worked together in this way,” she said. “I think utilities don’t quite know how to reach out to an industry like this yet, and the industry doesn’t 100 percent know what their options are.”
She said she hopes her report brings the issue of cannabis’ energy expenditures into the light: “I don’t think anyone wants to discourage energy use, but they do want to discourage wasteful energy use and help people get more opportunities for clean energy.”
Article Disclaimer: This article was originally published on TakePart. Reprinted with permission. John R. Platt covers the environment, technology, philanthropy and more for Scientific American, Conservation, Lion and other publications.
Amid 170,000-plus attendees and 2,700 sessions, it is hard not to be awed by the sheer spectacle that was Dreamforce, the annual conference hosted by customer relationship management tools developer and service provider Salesforce.com. But beyond rumors that it will buy Twitter, why should Salesforce.com and CRM matter to small and local community radio, and what can radio learn?
Whether it is Tony Robbins or U2, Dreamforce attracts big names and marquee corporations. For good reason — Salesforce.com has made a tremendous name for itself across many industries, from finance to retail to every sector of technology imaginable. Some of the world’s biggest nonprofits use it to manage donor relations.
“All good,” you say, “but who cares?”
Hear me out. The noncommercial media space, including community radio and public media, has much to learn from successful nonprofits using data and technology to grow. The analytics revolution that Salesforce.com and competitors have ushered into modern life is also a chance for community radio and public media to assess what is most important. It matters because contributors have new expectations. It also matters because technology can help stations focus less on paperwork and more on the relationships with their supporters.
Three key things at Dreamforce struck me.
Community radio can use technology to grow what people expect of it. At Dreamforce there were many instances of nonprofits using data, mobile and service to engage supporters in ways that press community radio to consider how it can inspire members and underwriters and expand its own service. One UK nonprofit brings public concern for the homeless to smartphones by letting users geolocate people in need for service providers. Black Girls Code and Code 2040 leaders shared stories about how they made alliances with businesses work best for their constituencies. Discussions like this are incredibly instructive for community radio, which often fancies itself a voice for localism and subcommunities. Technology offers a chance to realize these ideals in a new, dynamic and creative way.
Community radio needs to embrace the new normal of data. Community radio collects all manner of information — recordings, volunteer information, etc. — but is missing a golden opportunity to do what it does better. More and more nonprofits are seeing how important it is to use data to show donors they care. Others still struggle. On the corporate side, Apple can tell you what a customer prefers and what they buy. Similarly, more and more nonprofits can track what a donor supports most, their average gift and when they’re most inclined to give. Some circles dismiss this level of tracking as invasive. The reality, though, is that the people who give to charity are exactly the people organizations need to value more. In my public media work, I’ve talked to many members who feel that an organization not knowing their giving habits equates to not caring about them personally. The world today has conditioned most people to expect connectedness as never before. They expect to give out an email address and assume an organization has their billing information and giving history on file. Yet a 2014 study indicates that catering to these new customer expectations is among companies’ lowest priorities. Community radio would benefit by switching it up.
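As a sketch of the kind of donor tracking described above, the following computes a donor's average gift, favorite program, and peak giving month from a gift history. The record layout and field names are invented for illustration, not taken from any actual CRM:

```python
# Toy donor-profile aggregation: average gift, most-supported program,
# and the month the donor is most inclined to give.
from collections import defaultdict
from statistics import mean

gifts = [  # (donor, program, month, amount) -- illustrative records
    ("alice", "news",  3,  50.0),
    ("alice", "music", 12, 100.0),
    ("alice", "news",  12, 75.0),
    ("bob",   "news",  6,  20.0),
]

profiles = defaultdict(lambda: {"amounts": [],
                                "programs": defaultdict(int),
                                "months": defaultdict(int)})
for donor, program, month, amount in gifts:
    p = profiles[donor]
    p["amounts"].append(amount)
    p["programs"][program] += 1
    p["months"][month] += 1

alice = profiles["alice"]
avg_gift = mean(alice["amounts"])                            # average gift
favorite = max(alice["programs"], key=alice["programs"].get) # most-supported program
peak_month = max(alice["months"], key=alice["months"].get)   # likeliest giving month
```

A few lines of aggregation are enough to answer the questions donors expect a station to know: what they support, how much they typically give, and when.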
The touch always matters most. Among the tiny and massive nonprofits at Dreamforce, the objective of all of these cool gizmos was clear: to make each organization’s people more effective at what they do, and to give them the most information possible for quality contacts with donor-members. Staff change, addresses change, but all nonprofits know their communication needs to be consistent and smart. The longtime supporter should have assurances that even new people know their importance to an organization, their history and what matters to them. A new donor should have regular, but unobtrusive, contact and a smooth ride into an organization’s world. As community radio leaders are well aware, it is tough to raise money and convert the casual observer into an active giver. Technology can only enhance the contact, but it’s that moment that matters most.
Community and public radio, and, really, all nonprofits, have some common cause in how your average business relates to a consumer. Where a business is trying to sell you an aesthetic, such as trust, a community radio station wants you to give money out of a higher ideal: mission, culture or a contribution to the commons. Community radio outlets are special snowflakes all, but we share the same challenges. Dreamforce demonstrates but one example of ways to tackle our biggest puzzles.
Article Disclaimer: This article was published by Radio World and retrieved on 10/15/2016 and posted at INDESEEM for information and educational purposes only. The views, ideas, materials, and content of the article remain those of the author. Please cite the original source accordingly.
By Steve Sonka and Yu-Tien Cheng, University of Illinois November 03, 2015 | 7:01 am EST
Big Data — the current buzzword of choice. Today it’s very easy to be overwhelmed by the hype promoting Big Data. Farm media, newspapers and general media, and conference speakers all extol the future transforming effects of Big Data, stressing that “Big Data will be essential to our future, whatever it is.” The goal of this article, and the series of five that follow, is to begin to unravel that “whatever it is” factor for agriculture.
We’ll definitely explore “whatever it is” from a managerial, not a computer science, perspective. Potential implications for agriculture will be the primary emphasis of the following set of articles:
1. Big Data: More Than a Lot of Numbers! This article emphasizes the role of analytics in enabling the integration of various data types to generate insights. It stresses that the “Big” part of Big Data is necessary, but it’s the “Data” part of Big Data that’s likely to affect management decisions.
2. Precision Ag: Not the Same as Big Data But… Today, it’s easy to confuse the two concepts, Precision Ag and Big Data. In addition to briefly reviewing the impact of Precision Ag, this article stresses that Big Data is much more than Precision Ag. However, Precision Ag operations often will generate key elements of the data needed for Big Data applications.
3. Big Data in Farming: Why the “Why” Matters! Big Data applications generally create predictions based on analysis of what has occurred. Uncertainty in farming, rooted in biology and weather, means that the science of agriculture (the Why) will need to be integrated within many of the sector’s Big Data applications.
4. Big Data: Alive and Growing in the Food Sector! Big Data already is being extensively employed at the genetics and consumer ends of the food and ag supply chain. This article will stress the potential for capabilities and knowledge generated at these levels to open new opportunities within production agriculture.
5. A Big Data Revolution: What Would Drive It? Management within farming historically has been constrained by the fundamental reality that the cost of real-time measurement of farming operations exceeded the benefits of doing so. Sensing capabilities (from satellites, to drones, to small-scale weather monitors, to soil moisture and drainage metering) now being implemented will materially lessen that constraint. Doing so will create data streams (or is it floods?) by which Big Data applications can profoundly alter management on the farm.
6. A Big Data Revolution: Who Would Drive It? Over the last 30 years, novel applications of information technology have caused strategic change in many sectors of the economy. This article draws on those experiences to inform our thinking about the potential role of Big Data as a force for change in agriculture.
Big Data: More Than a Lot of Numbers!
Innovation has been critical to increased agricultural productivity and to the support of an ever-increasing global population. To be effective, however, each innovation had to be understood, adopted, and adapted by farmers and other managers.
Although Big Data is relatively new, it is the focus of intense media speculation today. However, it is important to remember that Big Data won’t have much impact unless it too is understood, adopted and adapted by farmers and other managers. This article provides several perspectives to support that process.
Big Data Defined
“90% of the data in the world today has been created in the last two years alone” (IBM, 2012).
In recent years, statements similar to IBM’s observation and associated predictions of a Big Data revolution have become increasingly more common. Some days it seems like we can’t escape them!
Actually, Big Data and its hype are relatively new. As shown in Figure 1, use of the term, Big Data, was barely noticeable prior to 2011. However, the term’s usage literally exploded in 2012 and 2013, expanding by a factor of 5 in just two years.
With all new concepts, it’s nice to have a definition. Big Data has had more than its fair share. Two that we find helpful are:
• The phrase “big data” refers to large, diverse, complex, longitudinal, and/or distributed data sets generated from instruments, sensors, Internet transactions, email, video, click streams, and/or all other digital sources available today and in the future (National Science Foundation, 2012).
• Big Data is high-volume, high-velocity, and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making (Gartner IT Glossary, 2012).
These definitions are impressive. However, they really don’t tell us how Big Data will empower decision makers to create new economic and social value.
From Technology to Value
In the next few paragraphs, we’ll move beyond those definitions to explore how application of Big Data fosters economic growth. In this article, we’ll present non-ag examples because today there is more experience outside of agriculture. The following articles in this series will focus on agriculture.
Big Data generally is referred to as a singular thing. It’s not! In reality, Big Data is a capability. It is the capability to extract information and craft insights where previously it was not possible to do so.
Advances across several technologies are fueling the growing Big Data capability. These include, but are not limited to computation, data storage, communications, and sensing.
These individual technologies are “cool” and exciting. However, sometimes a focus on cool technologies can distract us from what is managerially important.
A commonly used lens when examining Big Data is to focus on its dimensions. Three dimensions (Figure 2) often are employed to describe Big Data: Volume, Velocity, and Variety. These three dimensions focus on the nature of data. However, just having data isn’t sufficient.
Analytics is the hidden, “secret sauce” of Big Data. Analytics refers to the increasingly sophisticated means by which analysts can create useful insights from available data.
Now let’s consider each dimension individually:
Interestingly, the Volume dimension of Big Data is not specifically defined. No single standard value specifies how big a dataset needs to be for it to be considered “Big”.
It’s not like Starbucks, where the Tall cup is 12 ounces and the Grande is 16. Rather, Big Data refers to datasets whose size exceeds the ability of typical software to capture, store, manage, and analyze.
This perspective is intentionally subjective, and what is “Big” varies between industries and applications. An example of one firm’s use of Big Data is provided by GE, which now collects 50 million pieces of data from 10 million sensors every day (Hardy, 2014).
GE installs sensors on turbines to collect information on the “health” of the blades. Typically, one gas turbine can generate 500 gigabytes of data daily. If use of that data can improve energy efficiency by 1%, GE can help customers save a total of $300 billion (Marr, 2014)! The numbers and their economic impact do get “Big” very quickly.
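A quick sanity check on the scale implied by those figures; the fleet size here is an assumption for illustration, not a number from the cited sources:

```python
# How fast "Big" accumulates: 500 GB per gas turbine per day.
gb_per_turbine_per_day = 500     # figure quoted above (Marr, 2014)
turbines = 1_000                 # assumed fleet size, for illustration
days = 365

pb_per_year = gb_per_turbine_per_day * turbines * days / 1_000_000
print(f"{pb_per_year:,.1f} petabytes per year")
```

At this rate a modest fleet generates on the order of hundreds of petabytes annually, far beyond what conventional desktop tools can store, let alone analyze.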
The Velocity dimension refers to the capability to acquire, understand, and respond to events as they occur. Sometimes it’s not enough just to know what’s happened; rather we want to know what’s happening. We’ve all become familiar with real-time traffic information available at our fingertips.
Google Maps provides live traffic information by analyzing the speed of phones using the Google Maps app on the road (Barth, 2009). Based on changing traffic conditions and extensive analysis of the factors that affect congestion, Google Maps can suggest alternative routes in real time to ensure a faster and smoother drive.
Variety, as a Big Data dimension, may be the most novel and intriguing. For many of us, our image of data is a spreadsheet filled with numbers meaningfully arranged in rows and columns.
With Big Data, the reality of “what is data” has wildly expanded. The lower row of Figure 3 shows some newer kinds of sensors in the world, from cell phones to smartwatches to smart lights.
Cell phones and watches can now monitor users’ health. Even light bulbs can be used to observe movement, which helps some retailers detect consumer behavior in stores and personalize promotions (Reed, 2015). We even include human eyes in Figure 3, as it would be possible to track your eyes as you read this article.
The power of integrating across diverse types and sources of data is commercially substantial. For example, UPS vehicles are fitted with sensors that track engine performance, speed, braking, direction, and more (van Rijmenam, 2014).
By analyzing these and other data, UPS is able not only to monitor the engine and driving behavior but also to suggest better routes, leading to substantial fuel savings (Schlangenstein, 2013).
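The UPS example captures the essence of the Variety dimension: readings from different sensor types must be fused into a single per-vehicle record before any analysis can happen. A toy sketch, with field names invented for the example:

```python
# Fusing readings from different sensor sources keyed by vehicle.
engine = {"truck42": {"rpm": 2100, "engine_temp_c": 96}}
gps    = {"truck42": {"speed_mph": 58, "heading": "NE"}}
brakes = {"truck42": {"hard_brakes_today": 3}}

fused = {}
for source in (engine, gps, brakes):
    for vehicle, readings in source.items():
        fused.setdefault(vehicle, {}).update(readings)

# A simple rule over the fused record, e.g. flagging aggressive driving:
flagged = [v for v, r in fused.items() if r.get("hard_brakes_today", 0) >= 3]
```

The analytics value comes from the joined record: no single sensor stream could answer a question that spans engine health, routing, and driver behavior.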
So, Volume, Variety, and Velocity can give us access to lots of data, generated from diverse sources with minimal lag times. At first glance that sounds attractive. Fairly quickly, however, managers start to wonder, what do I do with all this stuff?
Just acquiring more data isn’t very exciting and won’t improve agriculture. Instead, we need tools that can enable managers to improve decision-making; this is the domain of Analytics.
One tool providing such capabilities was recently unveiled by the giant retailer Amazon (Bensinger, 2014). This patented tool will enable Amazon managers to undertake what the company calls “anticipatory shipping,” a method of starting to deliver packages even before customers click “buy.”
Amazon intends to box and ship products it expects customers in a specific area will want but haven’t yet ordered. In deciding what to ship, Amazon’s analytical process considers previous orders, product searches, wish lists, shopping-cart contents, returns, and even how long an Internet user’s cursor hovers over an item.
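Amazon's actual model is proprietary, but the signals listed above suggest a simple weighted-scoring sketch. The weights, signal names, and threshold below are invented for illustration only:

```python
# Hypothetical "anticipatory shipping" score: weight the behavioral
# signals mentioned in the text and ship early past a threshold.
def purchase_score(signals):
    weights = {
        "prior_orders": 3.0,      # previous orders of similar items
        "searches": 1.0,          # product searches
        "wishlisted": 2.0,        # item on a wish list
        "in_cart": 4.0,           # sitting in the shopping cart
        "cursor_hover_sec": 0.5,  # how long the cursor hovered
    }
    return sum(weights[k] * v for k, v in signals.items())

score = purchase_score({"prior_orders": 2, "in_cart": 1, "cursor_hover_sec": 8})
ship_early = score >= 10  # assumed threshold: pre-position stock at a local hub
```

Whatever form the real model takes, the managerial point is the same: low-level behavioral signals become a decision (ship or wait) only after Analytics combines them.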
Analytics and its related, more recent term, data science, are key factors by which Big Data capabilities actually can contribute to improved performance, not just in retailing, but also in agriculture. Such tools are currently being developed for the sector, although these efforts typically are at early stages.
In this discussion, we explored the dimensions of Big Data — 3Vs and an A. The Volume dimension links directly to the “Big” component of Big Data. Variety, Velocity and Analytics relate to the “Data” aspect. While Volume is important, strategic change and managerial challenges will be driven by Variety, Velocity, and especially Analytics.
Unfortunately, media and advertising tend to emphasize Volume; it’s easy to impress with really, really large numbers. But farmers and agricultural managers shouldn’t be distracted by statistics on Volume.
Big Data’s potential doesn’t rest on having lots of numbers or even having the world’s largest spreadsheet. Instead, the ability to integrate across numerous and novel data sources is key.
The point of doing this is to create new managerial insights that enable better decisions. While Volume and Variety are necessary, Analytics is what allows for fusion across data sources and new knowledge to be created.
Emphasizing the critical role of Variety of data sources and Analytics capabilities is particularly important for production agriculture. Individual farms and other agricultural firms aren’t likely to possess the entire range of data sources needed to optimize value creation.
Further, sophisticated and specialized Analytics competencies will be required. To be effective, however, the computer science competencies also need to be combined with knowledge of the business and science aspects of agricultural production.
At times this sounds complicated and maybe threatening. Visiting recently with a farmer from Ohio about this topic, I heard a comment that helps unravel this complexity. He noted that effective use of Big Data, for him as a Midwestern farmer, is mainly about relationships.
The relevant question is, “Which input and information suppliers and customers can provide the Big Data capabilities for him to optimize his decisions?” And he noted, “For farmers, managing those relationships isn’t new!”
Article Disclaimer: This article was published by Agprofessional.com and was retrieved and posted at INDESEEM for information and educational purposes only. The views, opinions, thoughts, and information expressed in this article are those of the authors. Please cite the original and INDESEEM accordingly.
This is a self-initiated, motivational research project meant to test statistics and show that data mining, visualization, and predictive analytics are not a bogeyman. Have fun!
Weight loss is a crucial issue in the United States today. Research shows that excess weight increases our risk of heart and other diseases. The Body Mass Index (BMI) is one way individuals can monitor and police their weight in order to decrease their risk of illness. Even though illnesses are caused by many factors, not just physiological and biological characteristics, the condition of the physical body does much to raise or lower our chances of being sick.
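For reference, BMI is weight in kilograms divided by the square of height in meters. A minimal implementation, with the standard CDC-style categories; the example weight and height are illustrative:

```python
# BMI = weight (kg) / height (m) squared.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(b):
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

value = bmi(85, 1.80)  # ~26.2 for an 85 kg, 1.80 m person
```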
You may be losing more weight than you think. Determining that depends on your patience and motivation to track, mine, and analyze your data and to build simple predictive models that could help tell a really cool story about how to manage your weight, health, and money.
With that in mind, I undertook a personal research project to monitor my weight, see how I am doing physically, and find out how I can lose some extra kilograms of body fat and everything that comes with it.
To this end, I downloaded the Google Play Store app called Pedometer and started to monitor my daily movements (walking) to see how that facilitates weight reduction and a lower BMI score.
I will collect pedometric data for eight months, from August 2015 until April 2016. Data will be collected with the pedometer app installed on my cellphone, on a 24-hour basis. The data collected include the number of steps, kCal burned, date, time, distance covered in miles, and daily average speed in miles per hour.
Other data being collected include daily weight (kg), the time of day the weight was recorded (usually early morning), max heart rate, heart rate at rest, age (a constant factor), gender (a dummy variable), frequency walked (i.e., the number of times a day walking was initiated, not the number of walks), and days in action (the number of days walked).
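One way to structure such a daily record in code; the field names mirror the variables listed above, but the layout and the example values are my own:

```python
# A daily record combining the pedometer fields with the manually
# logged measures described in the text.
from dataclasses import dataclass

@dataclass
class DailyRecord:
    date: str
    steps: int
    kcal: float
    distance_miles: float
    avg_speed_mph: float
    weight_kg: float
    max_heart_rate: int
    resting_heart_rate: int
    walks_initiated: int   # "frequency walked"
    days_in_action: int    # running count of days walked

rec = DailyRecord("2015-11-04", 9_500, 310.0, 4.2, 3.1, 85.0, 158, 62, 4, 70)
```

Keeping one tidy record per day makes the later export to a stats package (JMP, in this case) a matter of writing rows, rather than reshaping data after the fact.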
So far, I have collected pedometric data from August 2015 through November 4, 2015, and I hope to continue for an additional six months. Data are not collected while I am asleep…duh :)
All you need to do is launch the app once on your cellphone; as you walk, data are generated. You just have to keep your phone on you: in your bag, pocket, etc. No data are collected if you leave it at your desk or in your car, a mistake I often make, so the accuracy of your data depends on your commitment to keeping your phone with you at all times. No data are collected if your phone battery dies, either; it has to be charged at all times. At the end of each 24-hour period, a new data sheet/template starts immediately, indexed by time.
For my part, I usually take my phone when I am about to leave for work on weekdays, and on weekends as I see fit. I put the phone away when I am ready to take my night shower, and that’s it.
Analysis & Anticipated Outcome
The data collected in this self-initiated study will be analyzed using advanced statistical techniques in SAS JMP Pro 12. The goal is to build a self-derived predictive model that can be customized to any individual’s situation or circumstances for the purposes of weight loss and BMI management, or just to have some statistical fun with real-world data.
Preliminary Analysis & Result
Figure 1: Graph Builder shows the predicted effects of Days in Action, # of Steps taken, #kCal burned, and Distance covered on Weight (kg) and BMI. Click on the image to view the stats.
The cubic transformation of the number of days in action appears to have a decreasing effect on weight and BMI. As the number of days in action increases, overall BMI and body weight decrease proportionally, with RSquare values around 0.85.
However, #steps seems fairly variable on a daily basis but shows a symmetrical relationship with the predicted responses. Though the number of steps changes daily, body weight and BMI scores tend to decrease when more steps are taken and increase when fewer steps are taken.
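The step/weight relationship described above can be sketched as an ordinary least-squares slope. The data points below are made up for illustration (the study's actual data are not reproduced here), so only the sign of the slope carries meaning:

```python
# OLS slope of weight on steps: a negative slope means more steps
# track with lower body weight, as the preliminary graph suggests.
from statistics import mean

steps  = [4000, 6000, 8000, 10000, 12000]
weight = [86.0, 85.5, 85.1, 84.4, 84.0]   # illustrative values only

mx, my = mean(steps), mean(weight)
slope = (sum((x - mx) * (y - my) for x, y in zip(steps, weight))
         / sum((x - mx) ** 2 for x in steps))
```

With real data, the same slope (and its confidence interval) is what JMP's Graph Builder is visualizing behind the fitted lines.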
Similar preliminary visual associations can be deduced from the graph for #kCal and distance (miles). This is partly due to the small size of the dataset and is expected to change as more data are collected and the analysis is re-run.
The essence of this visual analysis is to identify relationships that exist between or among predictor variables and how those predictor variables impact the response (predicted) variables.
More to follow:)
Here is a snapshot of the pedometric data table. Click on the image to view the data. Some variables, like time in action, are derived.
Jim Melvin, Public Service Activities October 29, 2015
CLEMSON — While researchers at Clemson University have recently announced an array of breakthroughs in agricultural and life sciences, the data sets they now use to facilitate these achievements are like a mountain compared to the molehill that was available just a few years ago.
But as the amount of “Big Data” being generated and shared throughout the scientific community continues to grow exponentially, new issues have arisen. Where should all this data be stored and shared in a cost-effective manner? How can it be most efficiently transferred across advanced data networks? How will researchers be interacting with the data and global computing infrastructure?
A team of trail-blazing scientists and information technologists at Clemson is working hard to answer these questions by studying ways to simplify collaboration and improve efficiency.
“I use genomic data sets to find gene interactions in various crop species,” said Alex Feltus, an associate professor in genetics and biochemistry at Clemson. “My goal is to advance crop development cycles to make crops grow fast enough to meet demand in the face of new economic realities imposed by climate change. In the process of doing this, I’ve also become a Big Data scientist who has to transfer data across networks and process it very quickly using supercomputers like the Palmetto Cluster at Clemson. And I recently found myself — especially in just the past couple of years — bumping up against some pretty serious bottlenecks that have slowed down my ability to do my best possible work.”
Big Data, defined as data sets too large and complex for traditional computers to handle, is being mined in new and innovative ways to computationally analyze patterns, trends and associations within the field of genomics and a wide range of other disciplines. But significant delays in Big Data transfer can cause scientists to give up on a project before they even start.
“There are many available technologies in place today that can solve the Big Data transfer problem,” said Kuang-Ching “KC” Wang, associate professor in electrical and computer engineering and also networking chief technology officer at Clemson. “It’s an exciting time for genomics researchers to vastly transform their workflows by leveraging advanced networking and computing technologies. But to get all these technologies working together in the right way requires complex engineering. And that’s why we are encouraging genomics researchers to collaborate with their local IT resources, which include IT engineers and computer scientists. This kind of cross-discipline collaboration is reflecting the national research trends.”
“Universities and other research organizations can spend a lot of money building supercomputers and really fast networks,” Feltus said. “But with research computing systems, there’s a gulf between the ‘technology people’ and the ‘research people.’ We’re trying to bring these two groups of experts together and learn to speak a common dialect. The goal of our paper is to expose some of this information technology to the research scientists so that they can better see the big picture.”
Before long, the information generated by high-throughput DNA sequencing will be measured in exabytes. An exabyte is one quintillion bytes, or one billion gigabytes; a byte is the unit computers use to represent a letter, number or symbol.
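The unit arithmetic behind that claim, spelled out:

```python
# Decimal (SI) storage units: each step is a factor of 1,000.
byte     = 1
gigabyte = 10 ** 9   # one billion bytes
exabyte  = 10 ** 18  # one quintillion bytes

gigabytes_per_exabyte = exabyte // gigabyte  # one billion gigabytes
```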
In simpler terms, that’s a mountain of information so immense it makes Everest look like a molehill.
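To get a feel for why transfer bottlenecks matter at that scale, here is a rough back-of-the-envelope sketch. The 100 Gbit/s link speed is an assumed figure for illustration, not one taken from the article:

```python
# Rough sense of scale: how long would it take to move one exabyte
# of sequencing data over a fast research network?
# The 100 Gbit/s sustained link speed below is an assumption.

EXABYTE_BYTES = 10**18          # one quintillion bytes
GIGABYTE_BYTES = 10**9

gigabytes_per_exabyte = EXABYTE_BYTES // GIGABYTE_BYTES  # one billion

link_bits_per_second = 100 * 10**9      # assumed 100 Gbit/s link
bytes_per_second = link_bits_per_second / 8

seconds = EXABYTE_BYTES / bytes_per_second
years = seconds / (365 * 24 * 3600)

print(f"{gigabytes_per_exabyte:,} GB in one exabyte")
print(f"~{years:.1f} years to transfer 1 EB at a sustained 100 Gbit/s")
```

Even under this generous assumption of a fully saturated link, a single exabyte would take roughly two and a half years to move, which is why workflow engineering matters as much as raw hardware.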
“The technology landscape is really changing now,” Wang said. “New technologies are coming up so fast, even IT experts are struggling to keep up. So to make these new and ever-evolving resources available quickly to a wider range of different communities, IT staffs are more and more working directly with domain science researchers as opposed to remaining in the background waiting to be called upon when needed. Meanwhile, scientists are finding that the IT staffs that are the most open-minded and willing to brainstorm are becoming an invaluable part of the research process.”
The National Science Foundation and other high-profile organizations have made Big Data a high priority and are encouraging scientists to explore the issues surrounding it in depth. In August 2014, Feltus, Wang and five colleagues received a $1.485 million NSF grant to advance research on next-generation data analysis and sharing. Also in August 2014, Feltus and Walt Ligon at Clemson received a $300,000 NSF grant with Louisiana State and Indiana universities to study collaborative research for computational science. And in September 2012, Wang and James Bottum of Clemson received a $991,000 NSF grant to roll out a high-speed, next-generation campus network to advance cyberinfrastructure.
“NSF is increasingly showing support for these kinds of research collaborations for many of the different problem domains,” Wang said. “The sponsoring organizations are saying that we should really combine technology people and domain research people and that’s what we’re doing here at Clemson.”
Feltus, for one, is sold on the concept. He says that working with participants in Wang’s CC-NIE grant has already uncovered a slew of new research opportunities.
“During my career, I’ve been studying a handful of organisms,” Feltus said. “But because I now have much better access to the data, I’m finding ways to study a lot more of them. I see fantastic opportunities opening up before my eyes. When you are able to give scientists tools that they’ve never had before, it will inevitably lead to discoveries that will change the world in ways that were once unimaginable.”
This material is based upon work supported by the National Science Foundation (NSF) under Grant Nos. 1443040, 1447771 and 1245936. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF.
Article Disclaimer: This article was published by Clemson University and was retrieved on 10/29/2015 and posted here at INDESEEM for information and educational purposes only. The views, thoughts, research findings, and information contained in the article remain those of the authors. Please cite the original and this source accordingly.
You might presume, or at least hope, that humans are better at understanding fellow humans than machines are. But a new MIT study suggests an algorithm can predict someone’s behavior faster and more reliably than humans can.
Max Kanter, a master’s student in computer science at MIT, and his advisor, Kalyan Veeramachaneni, a research scientist at MIT’s computer science and artificial intelligence laboratory, created the Data Science Machine to search for patterns and choose which variables are the most relevant. Their paper on the project’s results will be presented at the IEEE Data Science and Advanced Analytics conference in Paris this week.
It’s fairly common for machines to analyze data, but humans are typically required to choose which data points are relevant for analysis. In three competitions, their machine made more accurate predictions than 615 of the 906 human teams. And while humans worked on their predictive algorithms for months, the machine took two to 12 hours to produce each of its competition entries.
For example, when one competition asked teams to predict whether a student would drop out during the next ten days, based on student interactions with resources on an online course, there were many possible factors to consider. Teams might have looked at how late students turned in their problem sets, or whether they spent any time looking at lecture notes. But instead, MIT News reports, the two most important indicators turned out to be how far ahead of a deadline the student began working on their problem set, and how much time the student spent on the course website. These statistics weren’t directly collected by MIT’s online learning platform, but they could be inferred from the available data.
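Indicators like these are not logged directly; they have to be derived from raw event records. Here is a minimal sketch of that kind of feature derivation, using a made-up event-log format — the field names and sample data are illustrative, not MIT’s actual schema, and a simple visit count stands in for time spent on the site:

```python
from datetime import datetime

# Hypothetical raw click-log events: (student, event type, timestamp).
# The schema and data are invented for illustration.
events = [
    ("alice", "pset_start", datetime(2015, 3, 1, 9, 0)),
    ("alice", "site_visit", datetime(2015, 3, 1, 9, 0)),
    ("alice", "site_visit", datetime(2015, 3, 2, 20, 0)),
    ("bob",   "pset_start", datetime(2015, 3, 4, 23, 0)),
    ("bob",   "site_visit", datetime(2015, 3, 4, 23, 0)),
]
deadline = datetime(2015, 3, 5, 0, 0)

def features(student):
    """Derive two predictive indicators for one student from raw events."""
    mine = [e for e in events if e[0] == student]
    # How far ahead of the deadline did they start the problem set?
    first_start = min(t for _, kind, t in mine if kind == "pset_start")
    head_start_hours = (deadline - first_start).total_seconds() / 3600
    # Visit count as a crude proxy for engagement with the course site.
    site_visits = sum(1 for _, kind, _ in mine if kind == "site_visit")
    return {"hours_before_deadline": head_start_hours,
            "site_visits": site_visits}

print(features("alice"))  # started 87 hours early, 2 visits
print(features("bob"))    # started 1 hour early, 1 visit
```

Automating exactly this step — enumerating and computing candidate derived variables from relational event data — is what the Data Science Machine does in place of a human analyst.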
The Data Science Machine performed well in this competition. It was also successful in two other competitions: one in which participants had to predict whether a crowd-funded project would be considered “exciting,” and another in which they had to predict whether a customer would become a repeat buyer.
Kanter told MIT News that there are many possible uses for his Data Science Machine. “There’s so much data out there to be analyzed,” he said. “And right now it’s just sitting there not doing anything.”
Article Disclaimer: This article was published by Quartz, based on research published by MIT, and was retrieved on 10/19/2015 and posted here at INDESEEM for educational and information purposes only. The views, opinions, thoughts, and research findings expressed in this article are those of its authors.
Rising temperatures and lower rainfall have already affected crop yields in areas of southern Australia, and yields will continue to be affected, the report said.
Greater frequency and intensity of extreme weather events, like bushfires, droughts and cyclones will lead to decreased productivity across the agricultural sector, including the livestock and dairy industries.
The prospect of reduced agricultural production is a big issue for Australia, where the gross value of all agricultural commodities produced was roughly $50 billion for the financial year ending June 30, 2014.
The agriculture, forestry and fishing sector employed 2.8 per cent of all employed Australians in August 2014, and represented 2.4 per cent of real gross value added to Australia’s economy in 2013-14, data from Austrade reveals.
Some agricultural commodities – wheat and frozen, chilled or fresh beef – are in Australia’s top ten exports.
“Between 1982 and 2012 more than half of Australia’s wheat-growing regions have improved their WUE [water use efficiency] by at least 50 per cent,” the GRDC report says.
“Many areas have achieved even more than this.”
Young farmer Joshua Gilbert works on the family cattle stud in Nabiac, NSW.
He is the chair of Young Farmers, a subgroup of the NSW Farmers Association.
Mr Gilbert said farmers had already been dealing with the challenges of climate change without necessarily knowing what to call it.
However, many farmers are recognising that changing conditions on their land are due to climate change, and some are taking steps to protect their farms from the effects, Mr Gilbert said.
“I guess what we’ve seen is there is a lot more knowledge from younger farmers,” Mr Gilbert said.
He said seasonal variability, including the unknowns of rainfall and extreme weather events, have been affecting farmers for years.
The long term changes to the climate would worsen this variability, as farmers could expect more droughts and bushfires in future, the Climate Council’s report said.
SBS contacted the Australian Livestock Exporters Council and the National Farmers Federation, to ask if they were concerned about the effects of climate change on Australia’s agriculture sector. Both were unavailable for comment.
Key findings of the Climate Council’s report:
Climate change is making weather patterns more extreme and unpredictable, with serious consequences for Australia’s agricultural production
Climate change is driving an increase in the intensity and frequency of hot days and heatwaves in Australia, changing rainfall patterns, increasing the severity of droughts, and driving up the likelihood of extreme fire danger weather.
Average rainfall in southern Australia during the cool season is predicted to decline further, and the time spent in extreme drought conditions is projected to increase.
Water scarcity, heat stress and increased climatic variability in our most productive agricultural regions, such as the Murray Darling Basin, are key risks for our food security, economy, and dependent industries and communities.
Climatic challenges could result in imports of key agricultural commodities such as wheat increasingly outweighing exports.
More frequent and intense heatwaves and extreme weather events are already affecting food prices in Australia
Climate change is increasing the variability of crop yields.
Food prices during the 2005-2007 drought increased at twice the rate of the Consumer Price Index (CPI), with fresh fruit and vegetables the worst hit, increasing 43 per cent and 33 per cent respectively.
Reductions of livestock numbers during droughts can directly affect meat prices for many years.
Rainfall deficiencies in parts of Western Australia and central Queensland are projected to reduce total national crop production by 12 per cent in 2014-15, and the value of beef and veal exports by 4 per cent.
Cyclone Larry destroyed 90 per cent of the North Queensland banana crop in 2006, affecting supply for nine months and increasing prices by 500 per cent.
The 2009 heatwave in Victoria decimated fruit crops, with significant production losses of berry and other fruit crops.
Climate change is affecting the quality and seasonal availability of many foods in Australia
Up to 70 per cent of Australia’s wine-growing regions with a Mediterranean climate (including iconic areas like the Barossa Valley and Margaret River) will be less suitable for grape growing by 2050. Higher temperatures will continue to cause earlier ripening and reduced grape quality, as well as encourage expansion to new areas, including some regions of Tasmania.
Many foods produced by plants growing at elevated CO2 have reduced protein and mineral concentrations, reducing their nutritional value.
Harsher climate conditions will increase use of more heat-tolerant breeds in beef production, some of which have lower meat quality and reproductive rates.
Heat stress reduces milk yield by 10-25 per cent and up to 40 per cent in extreme heatwave conditions.
The yields of many important crop species such as wheat, rice and maize are reduced at temperatures above 30°C.
Australia is extremely vulnerable to disruptions in food supply through extreme weather events
There is typically less than 30 days’ supply of non-perishable food and less than five days’ supply of perishable food in the supply chain at any one time. Households generally hold only about a 3-5 day supply of food. Such low reserves are vulnerable to natural disasters and to transport disruption from extreme weather.
During the 2011 Queensland floods, several towns such as Rockhampton were cut off for up to two weeks, preventing food resupply. Brisbane came within a day of running out of bread.
Australia’s international competitiveness in many agricultural markets will be challenged by the warming climate and changing weather patterns
Australia is projected to be one of the most adversely affected regions from future changes in climate in terms of reductions in agricultural production and exports.
Climate impacts on agricultural production in other countries will affect our competitiveness, especially if warmer and wetter conditions elsewhere boost production of key products such as beef and lamb.
If the current rate of climate change is maintained, adaptation to food production challenges will be increasingly difficult and expensive
By 2061, Australia’s domestic demand for food could be 90 per cent above 2000 levels, with a similar increase in export demand.
Transitioning to a new, low-carbon economy is critical to avoiding the most dangerous impacts of climate change.
The longer action on climate change is delayed, the more likely it is that progressive, small-scale adaptive steps to cope with climate change will become increasingly inadequate and larger, more expensive changes will be required.
Article Disclaimer: This article was published online by SBS and was retrieved on 10/18/2015 and posted here at INDESEEM for information and educational purposes only. The views, findings, thoughts, and opinions expressed in the article are those of the author and his source. Please cite the original source accordingly.