CHAOSS is a new Linux Foundation project aimed at producing integrated, open source software for analyzing software development, together with defining implementation-agnostic metrics for measuring community activity, contributions, and health. The CHAOSS community will help improve the transparency of key project metrics, which helps improve the projects themselves and helps third parties make informed decisions when engaging with them.
Understanding the community dynamics of open source software projects is of fundamental importance to developers, users, and decision makers, but gaining this knowledge is a specialized, time-consuming, and error-prone task. The CHAOSS community helps with this task by highlighting aspects of projects, tracking relevant patterns, and assisting in the early identification of problems and the detection of trends. In addition, the CHAOSS community explores what these aspects signal, how they relate to value, and how they might be used by people in positive or negative ways. Collectively, the CHAOSS community’s work can be used to study the structure of a community and its growth, maturity, and decline; examine project risks and vulnerabilities; understand project diversity; and explore a project’s position within a larger software ecosystem.
Publishing these requirements, and developing working technical systems in the open so that everyone can engage with and shape them, is a further step in transparency -- enabling better decision making, better awareness of problems, and deeper knowledge of project dynamics.
Starting in Linux version 3.14, a new scheduling class was introduced. Called SCHED_DEADLINE, this scheduling class implements Earliest Deadline First (EDF) along with a Constant Bandwidth Server (CBS) that is used to give applications a guaranteed amount of CPU time in each period. This type of scheduling is advantageous for robotics, media players and recorders, as well as virtual machine guest management. This talk will explain the history of SCHED_DEADLINE and compare it with various other methods of dealing with periodic deadlines. It will also discuss some open issues in the current Linux implementation and some of the improvements that are currently in development.
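To make the EDF idea concrete, here is a toy sketch (an illustration only, not kernel code; the task names and numbers are made up): among the runnable deadline tasks, the scheduler always runs the one whose absolute deadline comes first.

    # Toy illustration of the EDF rule used by SCHED_DEADLINE.
    # Not kernel code; task names and deadline values are invented.
    runnable = [
        {"name": "audio",   "abs_deadline_ns": 2_000_000},
        {"name": "video",   "abs_deadline_ns": 5_000_000},
        {"name": "logging", "abs_deadline_ns": 9_000_000},
    ]

    # EDF: pick the runnable task with the earliest absolute deadline.
    next_task = min(runnable, key=lambda t: t["abs_deadline_ns"])
    print(next_task["name"])  # prints "audio" -- the earliest deadline runs first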
DeepSPADE stands for “Deep Spam Detection”: a natural-language classification system whose job is to distinguish spam from non-spam posts on public community fora. It uses a very deep, parallel CNN+GRU neural network designed in Keras and trained with a TensorFlow backend, reaching 99.1% accuracy on 16,000 test rows.
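As an illustration of what a parallel CNN+GRU text classifier can look like in Keras (a minimal sketch only -- the actual DeepSPADE architecture, layer sizes, vocabulary, and post length are not given in this abstract, so all of those are assumptions):

    # Minimal sketch of a parallel CNN+GRU spam classifier in Keras.
    # Not the actual DeepSPADE network; sizes and lengths are assumptions.
    from tensorflow.keras import layers, models

    VOCAB_SIZE = 20000   # assumed vocabulary size
    MAX_LEN = 200        # assumed padded post length

    posts = layers.Input(shape=(MAX_LEN,))
    embedded = layers.Embedding(VOCAB_SIZE, 128)(posts)

    # Convolutional branch: local n-gram features.
    conv = layers.Conv1D(64, 5, activation="relu")(embedded)
    conv = layers.GlobalMaxPooling1D()(conv)

    # Recurrent branch: longer-range context via a GRU.
    recurrent = layers.GRU(64)(embedded)

    # The parallel branches are concatenated; a sigmoid scores spam probability.
    merged = layers.concatenate([conv, recurrent])
    spam_probability = layers.Dense(1, activation="sigmoid")(merged)

    model = models.Model(posts, spam_probability)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

The point of the parallel design is that both branches see the same embedded post: the convolutional branch captures local word patterns while the GRU branch captures longer-range context, and their outputs are merged before the final spam/non-spam decision.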
In this session, you’ll be amazed at how DeepSPADE can augment community moderators.
From machine learning to the increasing demand for data-intensive, customer-facing apps and bulletproof scalability, the demands on open source are greater than ever before.
As open source adoption in enterprises continues to grow exponentially across virtually all technology segments, its impact on the industry is growing commensurately. This is resulting in all sorts of changes in the open source community, within enterprises themselves, and across the vendor ecosystem, both open source and proprietary. Open source enables enterprises to be leaner and more efficient, to take more risks, and to be more responsive to their customers -- but how is enterprise adoption impacting and shaping the evolution of open source?
How are enterprises adopting open source? What is working and what isn't? What actual impact are they having, is it good or bad for open source, and will there ever really be a demise of proprietary software? This presentation will discuss current adoption in the enterprise using real-world examples, along with current trends such as enterprises open sourcing their own software assets, the movement toward being an "open source first" company, Inner Source, and a future shaped by open source. It will also discuss whether enterprises are learning from their successes, their failures, their peers, and the community, and what we can expect as their influence expands.
The deadline scheduler adds the ability to schedule tasks not according to a fixed priority, but according to a dynamic priority based on the task’s deadline. To use this scheduler, a task needs to specify three parameters: the period, the runtime, and the relative deadline.
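These parameters are passed to the kernel in nanoseconds through the sched_setattr() system call. As a minimal sketch (assuming an x86_64 Linux system and sufficient privileges; the 10 ms / 100 ms values are only illustrative), a task could place itself under SCHED_DEADLINE like this:

    # Minimal sketch: set SCHED_DEADLINE on the current process via the raw
    # sched_setattr() syscall. Assumes x86_64 (syscall number 314) and enough
    # privileges (e.g. root or CAP_SYS_NICE); the values are illustrative only.
    import ctypes

    NR_SCHED_SETATTR = 314   # x86_64; the number differs on other architectures
    SCHED_DEADLINE = 6

    class SchedAttr(ctypes.Structure):
        _fields_ = [
            ("size", ctypes.c_uint32),
            ("sched_policy", ctypes.c_uint32),
            ("sched_flags", ctypes.c_uint64),
            ("sched_nice", ctypes.c_int32),
            ("sched_priority", ctypes.c_uint32),
            # The three deadline-scheduling parameters, in nanoseconds:
            ("sched_runtime", ctypes.c_uint64),
            ("sched_deadline", ctypes.c_uint64),
            ("sched_period", ctypes.c_uint64),
        ]

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    attr = SchedAttr()
    attr.size = ctypes.sizeof(SchedAttr)
    attr.sched_policy = SCHED_DEADLINE
    attr.sched_runtime = 10_000_000    # up to 10 ms of CPU time...
    attr.sched_deadline = 100_000_000  # ...delivered before a 100 ms deadline...
    attr.sched_period = 100_000_000    # ...in every 100 ms period.

    # sched_setattr(pid, attr, flags); pid 0 means the calling task.
    if libc.syscall(NR_SCHED_SETATTR, 0, ctypes.byref(attr), 0) != 0:
        raise OSError(ctypes.get_errno(), "sched_setattr failed")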
Using these parameters, the scheduler tries to provide each deadline task its runtime of CPU time in every period. Under perfect conditions, SCHED_DEADLINE is able to schedule all tasks within their deadlines, providing the timing guarantees real-time tasks need. Did you notice the “under perfect conditions” part? The conditions are:
- Implicit deadline tasks (deadline equal to the period), or constrained deadline tasks if one is willing to be quite pessimistic;
- Tasks should not self-suspend;
- All the system’s delays must be taken into account;
- The runtime must represent the worst-case execution time;
- The system should not be overloaded, which requires a very restrictive setup.
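To give a rough sense of the overload condition (a hedged sketch; the exact rules are in the kernel’s SCHED_DEADLINE documentation): the admission control performed at sched_setattr() time caps the total bandwidth of deadline tasks, where each task contributes runtime/period, at approximately

    sum_i (runtime_i / period_i) <= M * (sched_rt_runtime_us / sched_rt_period_us)

for M CPUs, 95% of the CPU time by default. Passing this test is necessary, but on multiprocessors it alone does not guarantee that every deadline is met -- hence the restrictive setups mentioned above.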
All these restrictions leave room for improvements in the deadline scheduler. This presentation aims to list these points of improvement and to point out directions and challenges, such as:
- Guarantees for constrained deadline tasks
- Arbitrary affinity tasks
- Hierarchical scheduling – RT Throttling
- Tracepoints
- Precise way to define task’s runtime
- Other possibilities for admission tests
There are many points of improvement in the deadline scheduler, and discussing them is fundamental to the wider and safer adoption of this powerful scheduler.
First, a few words about what this talk is not. It is not a tutorial on how to program quantum computers. For that, you should find a D-Wave machine or go to http://research.ibm.com/ibm-q/, either of which should provide an excellent hands-on introduction to the current practice of quantum computing. Either way, highly recommended!
This talk instead gives an overview of the current state and trends of quantum-computing technology. It then uses these trends to make some educated guesses about the challenges facing the use of quantum computing in production. Of course, the bigger the killer app, the more effort will be invested in overcoming these challenges. This talk therefore also gives an overview of quantum computing’s most likely killer apps. This will lead into some possibilities of how quantum computing might affect the Linux plumbing, and vice versa. The talk will conclude with the usual free advice, which will be worth every penny that you pay for it.