Over the last decade, Black Duck by Synopsys has recognized some of the most innovative and influential open source projects launched each year. This recognition is a tribute to the success and momentum of these projects and an affirmation of their prospects going forward. We’ve seen honorees like Kubernetes (2014), Docker (2013), Ansible (2012), Bootstrap (2011), NuGet (2011), and OpenStack (2010) evolve to become some of the most influential open source projects in the market. We expect this year’s rookies to be no exception.
Spanning an array of functions and technologies, the 2018 Open Source Rookies of the Year invest their efforts in everything from autonomous driving, scalable blockchain, and VNF orchestration to personal security and relationship management.
The world’s cellular networks are the focus of innovation as users require increasingly complex and resource-intensive technologies. Software-defined networks (SDNs) and virtual network functions (VNFs) have laid the foundation for modern carrier performance, and with 5G on the horizon, an open solution for rapid and automated VNF orchestration is critical to the industry’s next great leap.
AT&T approached the Linux Foundation to create what is now the Open Network Automation Platform (ONAP), an outgrowth of previous open source projects (OPEN-O, OpenDaylight, OPNFV, OpenStack), representing the aggregate of efforts from major carrier players like Huawei and China Mobile. ONAP builds on these projects to enable the virtualization of existing carrier networks by automating traffic management and resource allocation. Supported by carrier members, whose subscribers represent 60% of the worldwide market, ONAP has quickly risen to be the most prominent open source VNF orchestration platform.
Now, with a year behind it and the Linux Foundation’s networking division at the wheel, ONAP is poised to tackle some key advancements in the coming year, including container integration for increased VNF deployment flexibility, increased carrier membership, services support for autonomous vehicles and virtual reality, and its second release at the end of May.
Put aside your cryptocurrency exhaustion for a moment, and think about what blockchain technologies mean for generating secure, authentic data records, and the role they play in quelling the creation of counterfeit transactions and data manipulation. The RChain co-op takes it a step further, seeking to establish the possibility of building a scalable, secure, and sustainable blockchain. With it, they intend to implement a decentralized, immutable, and global compute infrastructure.
The RChain platform allows concurrent, parallel execution of smart contracts by running them on the inherently concurrent Rho Virtual Machine. RChain’s unique sharding architecture—based on a framework of namespaces—effectively establishes multiple blockchains per node, each running independently. This enables RChain to deliver enterprise-class scalability and unprecedented transactional throughput. The idea here: By using a blockchain to store the state of a truly concurrent virtual machine, you can build a high-performance compute infrastructure that can’t be taken down by malicious adversaries.
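The sharding idea above can be sketched in a few lines. The Python below is purely illustrative, not RChain's actual API or the Rho Virtual Machine: transactions tagged with different namespaces append to independent chains, so shards can advance concurrently instead of serializing through one global ledger.

```python
# Toy sketch of namespace-based sharding: transactions in different
# namespaces append to independent hash chains and can run in parallel.
# All names and structures here are illustrative assumptions.
import hashlib
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

class NamespaceChain:
    """One independent chain of state per namespace."""
    def __init__(self):
        self.blocks = ["genesis"]
    def append(self, payload: str):
        self.blocks.append(block_hash(self.blocks[-1], payload))

def apply_transactions(txs):
    """Group transactions by namespace; each shard executes independently."""
    shards = defaultdict(list)
    for ns, payload in txs:
        shards[ns].append(payload)
    chains = {ns: NamespaceChain() for ns in shards}
    def run(ns):
        for payload in shards[ns]:
            chains[ns].append(payload)
    with ThreadPoolExecutor() as pool:
        list(pool.map(run, shards))  # shards advance in parallel
    return chains

chains = apply_transactions([("a", "tx1"), ("b", "tx2"), ("a", "tx3")])
```

Because no transaction in namespace "a" depends on state in namespace "b", neither shard waits on the other; that independence is what lets throughput scale with the number of shards.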
Working closely with the Ethereum community, the RChain co-op uses the Casper proof-of-stake protocol and correct-by-construction design—a fundamental departure from the resource-intensive proof-of-work approach employed by the most popular blockchains today. These distinguishing characteristics of RChain reflect some of the co-op’s core values of reducing energy consumption and resource dependence to present children with a sustainable and well-coordinated world. The results of their efforts are beginning to materialize, and the co-op is aggressively targeting subsecond block latency, with 40,000+ transactions per second.
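The energy argument comes down to how the next block producer is chosen. The toy Python below contrasts the two approaches; it is a simplified sketch under invented parameters, not the Casper protocol: proof of stake makes one stake-weighted draw, while proof of work burns hash attempts until a difficulty target is met.

```python
# Toy contrast of proof of stake vs. proof of work. Stakes, seeds, and
# difficulty values are illustrative assumptions, not Casper itself.
import hashlib
import random

def choose_proposer(stakes: dict, seed: int) -> str:
    """Stake-weighted random choice: one draw, no wasted computation."""
    rng = random.Random(seed)
    validators = sorted(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

def pow_attempts(payload: str, difficulty: int = 2) -> int:
    """Count the hashes needed to find a digest with `difficulty`
    leading zeros -- work that grows exponentially with difficulty."""
    nonce = 0
    while not hashlib.sha256(f"{payload}{nonce}".encode()) \
            .hexdigest().startswith("0" * difficulty):
        nonce += 1
    return nonce + 1
```

A validator with 90% of the stake is simply nine times as likely to be drawn as one with 10%; no electricity is spent racing to a hash target.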
Today’s social ecosystem is a complex one. With increasingly distributed friendships, extended work affiliations, and a diverse array of technologies designed to keep us “connected,” there is a burden on our memories to sustain order among chaos. For those with physical and mental conditions that impair social cognition, these complexities can be significant.
Monica is a self-described personal relationship management system that seeks to catalog and strengthen relationships through easy-to-use technologies. Monica combines the flexibility of a classic Rolodex with the dynamic versatility of modern social networks and turns to the open source community both to evolve the technology and to derive the strategy. Open sourced in June 2017 and promoted on a popular community site, the Monica project drew more than 600 comments and 1,100 upvotes in 2 days. Garnering attention worldwide, Monica saw more than 7,000 registrants and 100 pull requests within a week.
Monica’s user base is a testament to its sincere intentions and transparent operations, which benefit everyone from socialites, parents, and children to prisoners, Alzheimer’s and dementia patients, those recovering from traumatic brain injuries, and those with autism spectrum disorders. To date, the Monica project has seen some of its greatest participation stem from the Asperger’s subreddit, which has lauded the project and actively provided feedback and direction on feature enhancements to benefit the community. Now Monica is setting its sights on broader community adoption and deeper integrations, emphasizing the role of these various communities and partnerships to accomplish this ambitious goal.
For more information, visit www.monicahq.com.
Over the last year, major vehicle manufacturers have been clamoring to be among the first to market with a safe and reliable autonomous vehicle. Volkswagen, Volvo, BMW, Audi, Tesla, Uber, Google, and even Amazon are pushing boundaries with proprietary innovation and exclusive partnerships. This approach has one drawback: a cluttered arena of distinct approaches to autonomous driving.
Baidu is seeking to clear a path for innovation with Apollo, an open autonomous driving platform and flexible architecture. With its first version launched in July 2017, Apollo enables Tier 1 providers, OEMs, and startups to build their own autonomous vehicles without the burden of “reinventing the wheel.” Organizations can accelerate development by drawing on the collective expertise of Apollo partners and Apollo’s unique simulation engine, which contains tens of thousands of autonomous driving scenarios (ADSs), to rigorously test autonomous driving algorithms. Now organizations can verify that their algorithms meet basic regression rules before they get to the road, without the burden of deriving the test data from scratch.
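Scenario-based regression testing of this kind can be sketched simply. The Python below is an invented illustration in the spirit of Apollo's simulation engine, not its actual scenario format or APIs: a driving policy is replayed over recorded scenarios and checked against a basic rule before any road testing.

```python
# Toy sketch of scenario-based regression testing for a driving policy.
# The scenario fields, policy, and rule are illustrative assumptions.
def policy(scenario: dict) -> str:
    """A trivial policy: brake for red lights or close obstacles."""
    if scenario["light"] == "red" or scenario["obstacle_m"] < 10:
        return "brake"
    return "cruise"

def run_regression(scenarios) -> list:
    """Return the IDs of scenarios where the policy violates the basic
    rule 'always brake at a red light'."""
    failures = []
    for sc in scenarios:
        if sc["light"] == "red" and policy(sc) != "brake":
            failures.append(sc["id"])
    return failures

scenarios = [
    {"id": "s1", "light": "red", "obstacle_m": 50},
    {"id": "s2", "light": "green", "obstacle_m": 50},
]
```

Scaling the same loop to tens of thousands of recorded scenarios is what lets teams catch regressions in an algorithm change long before a vehicle reaches the road.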
With 50 partners for its 1.0 release, and more than 90 after its 2.0 release in January, Apollo is working fervently to extend its integration to additional hardware platforms to accommodate a wide variety of sensor, compute, and modeling technologies. Apollo’s latest release includes functions to improve performance on urban roads, multisensor support, obstacle perception, traffic light detection, and enhanced security features. Moving into 2018, the Apollo team is focusing on productization requirements to extend innovation to a greater portfolio of businesses, while upholding one of its core tenets: Autonomous driving data belongs to humankind, not solely to the company, and the more we all contribute, the more we will benefit from this project.
For more information, visit apollo.auto.
There is a well-established market for digital security. Safeguarding sensitive data in transit and at rest, monitoring network traffic for anomalous or malicious activity, and securing endpoint devices are all areas rife with custom solutions to meet whatever your digital security needs may be. But there remains a dire gap in the use of technology to protect the people and the environment surrounding an endpoint device, and the consequences can be severe for journalists, human rights defenders, officials, and civilians traveling the world and sacrificing their personal security for something greater.
In 2017, the Guardian Project began work on Haven, in collaboration with the Freedom of the Press Foundation (FPF), to identify key features and functions and to co-design a solution. The goal: to create a personal physical security application to transmit situational awareness of the environment surrounding a mobile device. Haven uses secure communications technologies, like Signal and Tor, and the sophisticated hardware already present in Android-based endpoint devices to deliver critical insight and enable strategic action to protect the people or assets being monitored.
Haven relies on a forked version of the SecureIt open source project for motion detection, adds a secure database structure, and layers on code to use sensor hardware. In 2018, the Haven team is keenly focused on enhancing the solution to use encrypted end-to-end messaging and the Guardian Project’s CameraV for evidence-grade photo and video capture. The team hopes to use machine learning to reduce false positives, extend support to additional peripheral sensors, and enable syncing of multiple Haven-enabled devices over a Tor network.
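At its core, sensor-based intrusion detection of this kind reduces to watching for abrupt changes in a signal. The Python below is a minimal sketch of threshold-based motion detection of the sort a Haven-style app performs on accelerometer readings; the data format and threshold are illustrative assumptions, not Haven's code.

```python
# Toy motion detector: flag indices where consecutive accelerometer
# magnitude readings jump by more than a threshold, suggesting the
# device was moved. Threshold and units are illustrative assumptions.
def detect_motion(readings, threshold=0.5):
    """Return the indices of readings that differ from the previous
    reading by more than `threshold`."""
    events = []
    for i in range(1, len(readings)):
        if abs(readings[i] - readings[i - 1]) > threshold:
            events.append(i)
    return events
```

Tuning that threshold is exactly where the team's planned machine learning work comes in: a learned model can separate a door opening from a passing truck far better than a fixed cutoff can.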
For more information or for partnership opportunities, visit guardianproject.github.io/haven.
In software development, each contribution to a body of work has the hallmark of its originator, giving a unique personality to the code that composes an application or component. This can be a hurdle for development teams to overcome, with style inconsistencies making code review difficult and leading to costly disagreements over style. Similarly, varied functionality among development tools means that cooperation and productivity are governed by the limits of the tools themselves. These impediments slow progress and drive a wedge between workgroups. Prettier, an opinionated code formatter, removes these obstacles by parsing source code and reprinting it according to a single consistent set of rules, taking style debates off the table so reviews can focus on logic.
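What an opinionated formatter does can be shown with a toy example. The Python below is purely illustrative; Prettier itself targets JavaScript and related languages, and these particular rules (one quote style, no trailing whitespace, single blank lines) are invented for the sketch.

```python
# Toy illustration of opinionated formatting: reprint source with one
# consistent style. These rules are invented; Prettier's real rules
# operate on a parsed syntax tree, not on raw text.
def format_lines(source: str) -> str:
    """Normalize quote style, strip trailing whitespace, and collapse
    runs of blank lines to a single blank line."""
    out, blank = [], False
    for line in source.splitlines():
        line = line.rstrip().replace("'", '"')
        if line == "":
            if not blank:
                out.append("")
            blank = True
        else:
            out.append(line)
            blank = False
    return "\n".join(out)
```

The payoff is that two contributors with different habits produce byte-identical output, so diffs show only what actually changed.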
For more information, visit prettier.io.
The world has changed remarkably over the last few decades—from command line interfaces, to graphical interfaces, to touch screens. The next great horizon can be seen—or heard—in the daily speech patterns of citizens around the globe. But language and dialect are as individual as a fingerprint, and technology must learn to decipher the subtle context, implications, and complex structures of human speech.
In June 2017, Mozilla’s Open Innovation team launched Common Voice with the goal of establishing the world’s largest open collection of human voice data to provide startups, innovators, and research universities with reliable datasets with which to train machine learning models for speech technologies. Currently, Common Voice is used to train Mozilla’s TensorFlow implementation of Baidu’s DeepSpeech architecture, as well as Kaldi, a widely used open source speech recognition toolkit. The project’s goal is to collect up to 10,000 hours of speech for as many distinct languages as possible.
Common Voice has seen remarkably rapid growth, supported by eager, vocal contributors and technology collaborations, such as with Mycroft, Snips, Dat Project, and Bangor University in Wales. Today, Common Voice represents the second-largest open speech dataset, with more than 500 hours of English voice data collected from 112 countries. To put that in perspective, the public collection of TED talks constitutes about 200 hours, while LibriSpeech, which is essentially public domain Books on Tape, represents about 1,000 hours.
The platform has also been adapted by communities to collect Macedonian and Welsh voice data, and community translations of the site are already underway for 17 new languages, which will open for voice contributions later this year.
To learn more and to contribute your valuable speech sample, go to voice.mozilla.org.