Source: boinc.berkeley.edu/talks/workshop_14.docx


A brief history of BOINC

Since this is the 10th BOINC workshop I’d like to step back and talk about

- The origins and history of BOINC
- The people who have contributed to it
- Its goals and accomplishments
- Its failures, why they happened, and what we can learn from them

1985

I got my CS PhD and arrived at UC Berkeley. My background was mostly in Math and theoretical CS, but I was interested in system software, and especially in scheduling – the question of what order you should do things in.

The Internet was coming into being, at least on university computers running Unix.

At that point the hot topic in computer systems was shared-memory multiprocessors, and companies like Sequent were starting to make computers with a lot of processors in one box. This seemed odd to me, since there were already lots of CPUs sitting around, mostly unused, in desktop computers, and they could communicate via the Internet. To me, the Internet could be viewed as a backplane holding a lot of processor boards.

1987

So I did various research projects involving distributed computing. A student and I built a system called Marionette for doing master/slave computing on Unix systems, which in those days were Sun 3/50 workstations. Marionette used rsh to do things remotely. It solved the heterogeneity problem by shipping source code to each worker host and compiling it there. We used it to create a distributed Mandelbrot set renderer, and we wrote a paper about it, and like most academic projects it ended there.

1992

I didn’t get tenure at Berkeley, so I left academics and worked at a couple of startups, building various kinds of commercial distributed systems. I enjoyed writing software that was widely used.

1995

Several things had happened by now:

- The web had been invented
- Many consumers, not just geeks and accountants, had a PC
- PCs had become powerful; Pentium processors were doing hundreds of megaFLOPS (a few years earlier, 1 megaFLOPS was fast)


The Internet had spread to the consumer market, with dialup ISPs like Compuserve and AOL.

In January 1995 my friend Dave Gedye visited me. Dave had been a CS grad student at Berkeley when I was there, and he was working at a company in Seattle called Starwave. He told me very excitedly about his idea for a project that would use home PCs to do SETI by analyzing digitized radio signals. He had located a group, led by Dan Werthimer, that was doing radio SETI using the Arecibo telescope; they happened to be at Berkeley also. Dave came up with the name SETI@home, though we considered a few others.

We had a series of meetings to work out the parameters of the system: what frequency band we would record, how we would ship and store the data, how we would divide it into chunks, what analysis algorithms we would use, how long each job would take, and so forth.

We concluded that if we could get 50,000 volunteers, we'd have enough FLOPS to do useful science. I thought that we could build a server that could send jobs fast enough to keep them all busy.

We wanted the client program to work as a screensaver - to compute when the PC is idle, and show a visualization of the computation. We knew that Windows was the most important platform. I'd never used Windows before, but I hastily got a PC with Visual Studio and learned enough to write some prototype visualizations.

The one we ended up using was inspired by the computer consoles from Star Trek: The Next Generation. It shows a 3D graph of power as a function of frequency and time.

1998

We were doing this in our spare time, but we knew that to actually do the project, we’d need to hire a couple of people, and that meant we’d need some money. Over the next 3 years we spent a lot of time asking companies for money. Starwave gave us a chunk, maybe $10K.

Then we got a grant from a private group called The Planetary Society, for about $100K, with a matching grant from the state of California. We used that money to hire a Windows programmer and a Mac programmer to do the respective GUIs. I wrote the client/server software and the web software. Eric Korpela and Jeff Cobb wrote the signal-processing code. We based the project at the UC Berkeley Space Sciences Lab.

1999

We worked extremely hard, especially with testing; we wanted to avoid a PR disaster at all costs. Eventually we released it to the public on May 17, 1999. It was a big hit, and we got a lot of coverage in mass media, including national TV. We were celebrities for a month or two.

Our server complex initially consisted of 3 Sun Sparcstations: one running an Informix server, one running the web server and scheduler, and one doing everything else. It quickly became overloaded. We got Sun to donate some dedicated servers, and we hastily rewrote the server software to increase its efficiency. Eric Korpela came up with the idea of keeping a cache of jobs in shared memory, to reduce the scheduler's database load.

One morning, maybe 6 months in, we looked at the overnight stats. We'd accumulated 1000 years of CPU time in 1 day. We knew we were onto something.

I had a great time working on SETI@home. For one thing I was now writing software that was used by LOTS of people. For another, I wasn’t just writing software, I was involved in the science. I’m a science junkie, so this was very exciting.

By this time several other volunteer computing projects had started: GIMPS and distributed.net, in around 1997, and Folding@home, in 1999.

2000

The work we did on SETI@home was divided between science and infrastructure. The infrastructure part - software, hardware, and sysadmin - took up most of our time. The SETI@home client was monolithic; a single executable included the science code, the screensaver graphics, and the infrastructure part - fetching and reporting jobs. This meant that whenever we wanted to change the science algorithm (which we wanted to do frequently, as it became clear how much processing power we had) all users had to manually download and install a new client. It was clear that we had to change this.

I was looking for a way to outsource this, so SETI@home could concentrate on science. I was also looking for a way to make a living, since my last startup went belly up in 1999.

In Jan 2000 I was approached by three people who were starting a company, called United Devices, to make money with volunteer computing. The idea was to get people to run a client that would spend part of the time doing good-of-humanity projects, and the rest doing jobs for paying customers like pharmaceutical companies.

I had misgivings about this model, and about the people, but I agreed to join the company as CTO. With some arm-twisting, I got my SETI@home colleagues to agree to the idea of moving SETI@home to the UD platform.

I spent most of 2000 attending VC pitches and writing the first version of the UD software, while still working on SETI@home. Eventually UD got some investment and hired programmers, and I spent a year commuting to Austin TX.

2001

I got tired of commuting and in 2001 UD rented office space in Berkeley and hired a couple of programmers to work with me there. UD also hired the entire team of programmers from distributed.net. Soon, a rift between the Austin and Berkeley branches developed and I disengaged from the company.


2002

By early 2002 it was clear that the UD adventure was a failure, and that we weren't going to move SETI@home to UD.

I communicated with Myles Allen from Oxford, who in 1999 had proposed using volunteer computing for climate modeling. I really wanted to help him do this.

So I started working on BOINC. My 2 years at UD weren't a total waste, because any time you write a piece of software twice, the second version is way better than the first.

I wanted the BOINC project to focus on developing software, not running web sites or servers, or generating publicity, or creating a "brand". So I designed BOINC so that scientists could create their own volunteer computing projects, using their own servers. It should be so easy to create a project that pretty much any computational scientist could do it. Projects would be independent. Volunteers could attach to any set of these projects.

Idealist goals

I wanted BOINC to enable an "ecosystem" in which there would be hundreds or thousands of projects, with new ones every day, competing for a large but finite pool of computing power. To get computing power, scientists would do public outreach - they'd create web sites educating the public about their research, they’d get media coverage, and they'd maintain a dialog with volunteers through their message boards.

The projects that were doing better science would get more computing power. Furthermore, the notion of "better science" would be determined by the public, not by funding agencies and academics.

As a dual benefit, volunteer computing would change the public. They'd learn about science. They'd think of science as something they were part of and could influence. They'd learn about the scientific method, and about the value of skepticism. Instead of thinking like selfish individuals, they’d think like members of an enlightened species in a global ecosystem.

In summary, I wanted BOINC to turn the world's home computers into a giant computing resource; BOINC would revolutionize science, transform the public, and change the world.

I had recently read Freakonomics, and I was intrigued by the idea that you can influence people's behavior by defining a "game" where their winning strategy is the way you want them to behave. I thought of BOINC in these terms.

I thought that volunteer computing would be embraced by scientific funding agencies, since it would give them lots of computing power for very little money.

I thought that volunteer computing, with all its technical challenges, would become a major area of computer science research, that volunteer computing conferences and journals would spring up.


I had visions of personal glory as well: I'd be the Godfather of the entire field, giving keynote talks and writing invited papers, like Ian Foster in the Grid world. This would compensate for my earlier failure in the academic world.

Almost none of these things actually came to pass. Later on I’ll speculate as to why.

The development of BOINC

So in the spring of 2002 I took my laptop to a cafe every day, designed BOINC, and implemented it. The initial checkin, on April 10 of that year, says

"the system is fairly feature-complete and runs a number of test cases correctly. Currently runs only on Linux".

I chose the name BOINC because I wanted it sound fun, like a cartoon sound effect. It took a while to work out the acronym. Later I learned that "boink" is slang for sex in some places, and in retrospect I probably should have chosen a different name.

Lots of new open-source technologies were emerging at this point, and I used many of them in BOINC. The server used MySQL. I learned PHP and used it to write the web site code. We used OpenSSL for RSA encryption. I learned OpenGL and used it to write the new SETI@home screensaver graphics. I used XML for representing complex things like jobs, and for exchanging data among the various parts of BOINC. I developed a design philosophy of using SQL for basic info, and storing complex info in XML inside SQL blobs.
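That design philosophy can be sketched in a few lines (an illustrative schema, not BOINC's actual one): fields the server needs to query on are plain SQL columns, while complex, evolving structure lives in an XML blob that only the application parses.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Sketch of the "scalar columns + XML blob" pattern with a hypothetical
# schema: queryable fields are real SQL columns; everything else is XML.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE workunit (
        id INTEGER PRIMARY KEY,
        name TEXT,
        state INTEGER,    -- basic info: plain SQL, easy to query
        xml_doc TEXT      -- complex info: an XML blob inside the row
    )
""")

xml_doc = """<workunit>
    <command_line>--nthreads 4</command_line>
    <file_ref><name>input.dat</name></file_ref>
</workunit>"""
db.execute("INSERT INTO workunit VALUES (?, ?, ?, ?)",
           (1, "wu_001", 0, xml_doc))

# SQL answers the simple query; the application decodes the blob.
name, blob = db.execute(
    "SELECT name, xml_doc FROM workunit WHERE state = 0").fetchone()
cmdline = ET.fromstring(blob).findtext("command_line")
print(name, cmdline)  # wu_001 --nthreads 4
```

The appeal is that adding a field to the XML needs no schema migration; the tradeoff is that nothing inside the blob can be filtered on in SQL.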

With this combination of tools it was very easy to add features to BOINC, and we went hog-wild, adding lots of user preferences, including project-specific preferences for customizing the screensaver graphics. We borrowed the idea of shared-memory job cache from SETI@home.

BOINC addressed several problems that had plagued SETI@home:

- We added a notion of "credit" based on FLOPs rather than number of jobs.
- We added a mechanism for validating jobs using replication, to detect bad results due to hardware errors and also to prevent credit cheating. This turned out to be kind of tricky because of numerical differences between processors.
- We allowed the client to buffer multiple jobs, so that it wouldn't run out of work if the project was down for a while, or the client was disconnected.
- We used encryption-based code signing to limit the damage that hackers could do if they broke into a project's server.
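The replication idea can be sketched roughly as follows (a toy version, not BOINC's actual validator; the function names and tolerance are illustrative): the same job goes to several hosts, and a result becomes canonical only when a quorum of results agree with it, using a fuzzy numerical comparison to absorb legitimate floating-point differences between processors.

```python
import math

def results_match(a, b, rel_tol=1e-5):
    # Fuzzy comparison: different CPUs legitimately produce slightly
    # different floating-point results, so exact equality won't work.
    return len(a) == len(b) and all(
        math.isclose(x, y, rel_tol=rel_tol) for x, y in zip(a, b))

def find_canonical(results, quorum=2):
    """Return the first result that a quorum of results agree with."""
    for candidate in results:
        agree = sum(1 for r in results if results_match(candidate, r))
        if agree >= quorum:
            return candidate
    return None  # no consensus yet; the server would issue another replica

good = [1.0, 2.5, 3.1]
slightly_off = [1.0, 2.5000001, 3.1]   # same answer, floating-point jitter
bad = [9.9, 9.9, 9.9]                  # faulty hardware, or credit cheating
assert find_canonical([good, slightly_off, bad]) == good
```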

I got some help from people who had worked on SETI@home. Hiram Clawson created the autoconf files and got BOINC working on Solaris. Eric Heien worked on many parts of BOINC, and set up its initial source code repository using CVS.

NSF funding


In Feb 2002 I submitted a grant proposal to NSF, and in July 2002 I learned that it was funded. The program officer, Mari Maeda, took a gamble on BOINC. Kevin Thompson from NSF has also been a champion of BOINC over the years.

I used the money to create a job for myself, that of Research Scientist at SSL. I hired 3 UC Berkeley undergrads in the summer of 2002, who worked on early versions of the PHP web code. Seth Cooper developed the first Windows GUI, which was part of the client.

Early contacts

In May I visited Climateprediction.net at Oxford. They had developed a prototype with the help of a local company, but they wanted to move to BOINC. Their app was much different from SETI@home - it had long jobs and big output files, and they wanted to have data servers all over the world. I added various features to BOINC to accommodate it, such as trickle messages and "compound applications" consisting of more than one program.

I also met Carl Christensen and Tolu Aina, both of whom went on to contribute huge amounts to BOINC. In particular, Carl integrated libcurl with the BOINC client, and later implemented the Quake-Catcher Network.

In May I met Derrick Kondo, a grad student at UC San Diego. He was interested in performance analysis issues in volunteer computing, and we had a long collaboration in which I supplied him with data from SETI@home, and he wrote papers with me as co-author. He also wrote papers about the relative costs of volunteer computing and clouds, and he developed an Amazon EC2 BOINC server image.

In August I visited Vijay Pande and Folding@home at Stanford. I tried to sell him on the idea of moving Folding@home to BOINC. He liked this idea and over the next year or so they worked on it a little, but they had a lot invested in their own software framework, and had a lot of smart programmers working on it, so this effort was eventually abandoned.

2003

UD lawsuit

In April of 2003 I was sitting at my desk at SSL when a policeman came in looking for me. He put a thick stack of papers on my desk. UD had filed a lawsuit against me, claiming that I had stolen their trade secrets and used them in BOINC. They had managed to get a judge to issue a court order requiring me to stop working on BOINC!

When my pulse returned to normal I read the lawsuit to see what trade secrets they were talking about. One was "an interface to shared memory consisting of create, attach, detach and destroy operations". Another was the use of function names starting with "db_" to access a database. Another was the use of a shared-memory job cache for the scheduler.


I started laughing. The whole thing was bogus. I contacted the university's legal department and we arranged a meeting with UD. I showed them, in great detail, the frivolity of their lawsuit. They agreed to drop it if I would re-implement the database interface layer, which was something I wanted to do anyway.

More development

That summer I hired more students to work on the web code. We implemented the user profile and message-board features. For the latter we had a build-versus-buy decision. Early versions of phpBB existed. We thought hard about using them, but decided against it. I think this was the right decision, although maintaining our own forum software was a pain for years to come.

That summer Karl Chen worked for BOINC. He was a Python hot-shot. He wrote the very powerful scripts for starting/stopping projects, and for installing applications (since rewritten in PHP).

The design of the server software assumed its final form. The key idea was that a single program, the transitioner, contained all the logic related to timing out jobs and dealing with failed jobs. It interacted with the scheduler, validator, assimilator, and job creation programs via the database. This design had some big advantages:

- It eliminated various race conditions.
- It allowed the server to handle client requests quickly even when the server as a whole was bogged down.
- It was highly scalable.
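The idea above can be sketched as a toy state machine (hypothetical state names and constants, far simpler than the real transitioner): all lifecycle decisions for a workunit live in one function that reads and updates job state, while the scheduler and validator just react.

```python
import time

# Toy transitioner sketch: one place owns the job-lifecycle logic.
REPLICATION, QUORUM, TIMEOUT = 2, 2, 3600

def transition(workunit, now=None):
    """Examine one workunit's results and decide what happens next."""
    now = time.time() if now is None else now
    for r in workunit["results"]:
        if r["state"] == "in_progress" and now > r["deadline"]:
            r["state"] = "timed_out"          # reclaim overdue results
    done = [r for r in workunit["results"] if r["state"] == "done"]
    active = [r for r in workunit["results"]
              if r["state"] in ("unsent", "in_progress", "done")]
    if len(done) >= QUORUM:
        return "validate"                     # enough results: wake validator
    if len(active) < REPLICATION:
        workunit["results"].append(           # replace failed/timed-out ones
            {"state": "unsent", "deadline": now + TIMEOUT})
        return "issued_replica"
    return "wait"

wu = {"results": [
    {"state": "in_progress", "deadline": 100},
    {"state": "in_progress", "deadline": 1000},
]}
assert transition(wu, now=200) == "issued_replica"  # one timed out: reissue
assert transition(wu, now=200) == "wait"
for r in wu["results"]:
    if r["state"] != "timed_out":
        r["state"] = "done"
assert transition(wu, now=200) == "validate"
```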

Interest from LIGO and CERN

In October I was contacted by the LIGO project, which was building gravitational-wave detectors, and was thinking of doing a volunteer computing project to look for a particular type of wave that pulsars would emit. My main contact there was Bruce Allen, a brilliant physicist who was also a brilliant hacker, which he did for fun. Bruce quickly became extremely interested in BOINC, especially the guts of its server.

Over the next couple of years Bruce and I communicated a great deal, often talking on the phone once a day or more. At one point Bruce's wife Maria Alessandra got kind of irritated by this and started referring to me as Bruce's "girlfriend".

Bruce added a mechanism called "locality scheduling" to BOINC to support the needs of Einstein@home. He also fixed numerous bugs in all parts of the server, and helped me clarify and improve its structure.

Over the years Bruce has supported BOINC in many ways. He wrote us into NSF proposals, he funneled money to us when we were temporarily broke, and recently he supported the development of the BOINC Android GUI.


In Nov 2003 I was contacted by Francois Grey, who at that point was PR director for the CERN IT division. He invited me to visit CERN and give a talk. I did this in Feb 2004, and we laid the groundwork for LHC@home, which used BOINC to simulate the new accelerator and its superconducting magnets.

Francois turned out to be a tremendous champion of volunteer computing and citizen science in general. On this visit I also met Ben Segal, who was also a driving force for volunteer computing at CERN, and later spearheaded the use of virtual machines at CERN and in BOINC.

Nov: I talked with IBM World Community Grid; they decided to use United Devices instead of BOINC.

I was delighted by the way things were heading, especially the interest from big science projects.

2004

Rom and Charlie

Toward the end of 2003 I corresponded with Rom Walton, who worked at Microsoft at that point. He did a bunch of volunteer tasks, and early in 2004 I suggested he come work for me. He came down for an interview, and I hired him. We briefly talked about him moving to the Bay Area, but this wasn't practical, so he worked remotely, first in Seattle and later in Florida.

Rom has done a million things for BOINC. He and I are complementary. I like to design and write code; Rom does this too, but he also does other stuff:

- Windows installer work (InstallShield)
- wxWidgets
- VirtualBox

He also handles many other functions:

- testing and release management
- sysadmin of our servers
- Trac and MediaWiki
- Pootle

Later I hired Charlie Fenton half-time; he's a Mac expert who had been working on SETI@home since its early days. Charlie has been working for BOINC since then, doing all the Mac-related stuff and also working on the Manager and on GPU enumeration and OpenCL.

This has been the core team ever since.

Anonymous platform

Feb: by this time SETI@home was open-source, and people were optimizing it for particular CPUs and porting it to new architectures. To accommodate this, I added the "anonymous platform" mechanism to BOINC, in which the client has its own app versions instead of getting them from the server.


- lets you build apps from source if you're worried about security
- handles platforms that the project doesn't know about
- lets people use optimized apps

This was quickly used to support SPARC/Solaris, FreeBSD, etc.

GUI RPC and the Manager

I realized that we needed to decouple the BOINC client from its GUI, and we moved to an architecture where they're separate programs communicating over a TCP connection. This had big advantages:

- software engineering
- remote control of clients
- one GUI can control multiple clients
- third-party GUIs (e.g. BoincTasks)

Rom developed a new GUI, the BOINC Manager, using the wxWidgets cross-platform toolkit. Bindings for the RPCs are available in C++, C#, and Java, and have been used to make many GUIs, including one based on curses.
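A minimal sketch of what such an RPC exchange looks like (the details here, an XML request terminated by a 0x03 byte on TCP port 31416, reflect the published GUI RPC protocol as I recall it; treat them as assumptions and check the current docs):

```python
import socket

GUI_RPC_PORT = 31416  # default GUI RPC port, per my recollection

def build_request(op: str) -> bytes:
    """Frame a GUI RPC request: an XML document plus a 0x03 terminator."""
    xml = f"<boinc_gui_rpc_request>\n<{op}/>\n</boinc_gui_rpc_request>\n"
    return xml.encode() + b"\x03"

def gui_rpc(host: str, op: str) -> str:
    """Send one request to a running client and read the framed reply."""
    with socket.create_connection((host, GUI_RPC_PORT), timeout=5) as s:
        s.sendall(build_request(op))
        reply = b""
        while not reply.endswith(b"\x03"):
            chunk = s.recv(4096)
            if not chunk:
                break
            reply += chunk
        return reply.rstrip(b"\x03").decode()

# e.g. gui_rpc("localhost", "get_state") against a running client
print(build_request("get_state"))
```

Because the protocol is just framed XML over TCP, any language with sockets can implement a GUI, which is what made third-party managers possible.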

It became clear that we needed a notion of cross-project credit; otherwise volunteers would lock into projects and not try new ones. This required a notion of cross-project user identity, which was tricky because we had based identity on email address, and we had to avoid revealing email addresses.
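One way to get such an identity (a hedged sketch of the idea, not necessarily BOINC's exact scheme) is to export only a hash of the email address: matching the same person across projects works, but the address itself is never published.

```python
import hashlib

def cross_project_id(email: str) -> str:
    # Derive an opaque ID from the (normalized) email address, so stats
    # sites can match accounts across projects without seeing the address.
    return hashlib.md5(email.strip().lower().encode()).hexdigest()

id_a = cross_project_id("volunteer@example.com")
id_b = cross_project_id("Volunteer@Example.com")  # same person, other project
assert id_a == id_b       # same email -> same cross-project ID
assert "@" not in id_a    # the address itself is never revealed
```

A plain hash of an email can still be reversed by a dictionary attack, which is one reason a real scheme might also mix a random per-account component into the exported ID.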

The client's scheduling policies became more sophisticated. We moved to preemptive scheduling, so that a small job or one with a close deadline could preempt a longer job.

We added the "sticky file" mechanism.

Scalability: we made the upload/download directories into hierarchies.

We realized that if a project processes millions of jobs per day, its DB will become bloated. So we added db_purge, and viewed the job table as a buffer rather than a permanent archive.

Initial projects

In August the first BOINC-based project, Predictor@home, launched. This was created by Michela Taufer, a post-doc at The Scripps Research Institute. Michela designed and implemented BOINC's "homogeneous redundancy" mechanism, which sends job replicas to numerically identical hosts. She needed this because her molecular simulations were numerically unstable.

SETI@home launches on BOINC; Eric Korpela maintains its Unix build system.

BURP (Big and Ugly Rendering Project) launches. This used BOINC to do ray-tracing rendering for animated movies. It was the first “hobbyist” project, which delighted me. It was created by Janus Kristensen, from Denmark. Janus contributed huge amounts to the BOINC PHP code, e.g. translation support and many forum features such as moderation. He also did a prototype of using BitTorrent to distribute large files to BOINC clients. (show Big Buck Bunny?)

August: Climateprediction.net launches

September: LHC@home launches

Supercomputer talk

Nov: talk at the IEEE Grid conference (part of Supercomputing 2004). This was the first talk about volunteer computing at a CS conference. At that point, Grid was the big bandwagon buzzword, the way Cloud and Big Data are today. In my talk I pointed out that volunteer computing was currently doing more actual computing than any Grid, and that its potential was far greater, by orders of magnitude. I wanted to make people wake up and think; I wanted a reaction. What I got instead was complete silence. This was my first inkling that maybe volunteer computing wasn’t going to be embraced by the computing world.

Account managers

Sept: I was approached by Matt Blumberg, who was interested in promoting computational cancer research. He wanted to figure out how to make BOINC easier to use, and in particular to have a single web site where people could register and choose projects. Working together, we designed BOINC's account manager architecture (released in June 2005) and Matt used this to create GridRepublic. Matt has continued to be very active in BOINC, is responsible for Intel's Progress Thru Processors, and is involved in Charity Engine.

The account manager mechanism is general: it lets you put a level of indirection between clients and projects, and can be used in ways other than what GridRepublic does.

2005

Feb 2005: Einstein@home launches. Bruce hired several very sharp people to work on it, and they’ve contributed a lot to BOINC. Bernd Machenschalk got the screensaver graphics framework to work on all platforms, and has continued to help develop and fix bugs in BOINC's API. Oliver Bock helped us transition the BOINC source code tree to Git. Reinhard Prix helped a lot with the Unix build system.

June: PrimeGrid launches. This hobby project was created by Rytis Slatkevicius from Lithuania. Since then Rytis has contributed large amounts to the BOINC web code, especially forums and teams. He has also worked for a number of projects, including GridRepublic, Charity Engine, and GridOctane.

Sept: Rosetta@home launches

Workshop

Francois Grey suggested that we hold a meeting of people using BOINC, which was held in July at CERN. He called it "The First Pangalactic Workshop on BOINC", in reference to SETI@home. Carl, Tolu, and Bruce attended. Since then we’ve had a workshop every fall, in various European cities, including Barcelona, Hannover, London, Grenoble, and 2 more in Geneva.

WCG

In April I was contacted again by IBM WCG, which had decided to switch from United Devices to BOINC. In Nov the BOINC version of WCG launched. I spent lots of time talking with their main technical person, Kevin Reed. Like Bruce, Kevin became extremely interested in BOINC, especially the server. Over the years he contributed a tremendous amount to it. He was very aware of WCG's position in the BOINC community. Most of his contributions dealt with problems that WCG was having, but he implemented solutions in a general way that could be used by all BOINC projects.

Among other things, Kevin designed BOINC's adaptive system for estimating job runtime and assigning credit, and also the multi-size app mechanism, where an app can have jobs in a range of sizes to accommodate widely differing processor speeds.

WCG has been an ideal project from my point of view. They’ve generated lots of scientific results and high-profile publications. They’ve done a good job of keeping their users informed and involved. And they’ve been very helpful to the BOINC community in general; for example they worked with us and with Einstein@home to develop and promote the BOINC Android client.

Development

Feb: made the web code translatable. This has evolved into everything being translatable, including project-specific web text.

May: Mac installer and GUI released.

Aug 2005: libcurl added by Carl Christensen

Alpha test

Dec: We changed the way we test the client. We had assembled a group of volunteers, and up to this point we’d tell them about test releases, and wait for them to tell us what didn’t work. Some disasters – public releases with major bugs – revealed that this didn’t work. We created a system where there’s a list of explicit tests. The volunteers do these tests, and report the results (positive or negative) through a web interface. We release versions only if they have a certain number of positive reports for all tests on each platform.

2006

New projects this year:

- Proteins@home (École Polytechnique, Paris)
- Spinhenge
- Quantum Monte Carlo at Home (Univ. of Münster)
- Chess960@home
- Riesel Sieve
- Rectilinear Crossing Number
- Tanpaku (Tokyo Univ. of Science)
- SIMAP (TU Munich)
- Malariacontrol.net (Swiss Tropical Institute)

Feb: CPDN “Climate Change” experiment, with a BBC documentary. We deployed a specialized installer.

Feb: SZTAKI desktop grid launch. SZTAKI has developed various technology around BOINC, such as support for code-signing with X.509 certificates, DCAPI, Genwrapper, 3G Bridge, and their own VBox wrapper. I’ve worked with several people on these projects, notably Attila Marosi.

I couldn’t help noticing that there were no new American projects. It was clear that my grand vision wasn’t unfolding exactly as planned.

Account managers and stats sites

Apr: BoincStats launches; Willy de Zutter has driven development of the account manager protocol.

May 2006: client that supports account managers

May: BOINC Account Manager (BAM!) launch

Aug: GridRepublic launches

Development

Changed the way application graphics work: graphics are now produced by a separate app, which can communicate with the main app via shared memory if it wants. We did this in SETI@home.

June: BOINC wrapper released

Aug: reworking of preferences code to handle multiple venues better, by Christian Beer. Christian has contributed tons to BOINC: Rechenkraft.net, RNA World, Virtual Server image, Twitter bootstrap

Added a notion of “secure mode” (account-based sandboxing) to the client. It's the default on the Mac, but not on Windows, because there it doesn’t allow GPU computing.

Volunteership

We noticed that the volunteer population was stagnant. It peaked at about 650,000, and has gradually declined ever since.

Do a web-based poll of BOINC participants; 35,000 respondents. 92% male. Lots of complaints about complexity.

BOINC Manager Simple View


Skype-based volunteer help

2007

ABC@home (Leiden Univ)
Leiden Classical
Lattice (U. Maryland)
SHA-1 Collision Search (Graz U. of Tech)
Superlink@Technion: Mark Silberstein. Integration of cluster/grid/BOINC.
WCG completes transition to BOINC.
Yoyo@home launches (July). Umbrella project; originally ran distributed.net, now others. Developed by Uwe Beckert, who has helped enhance the BOINC wrapper.
Enigma@Home (decode U-boat messages)

April: we switched to Trac and subversion.

BOINC-Wide teams added

BOINC All Project Stats: Markus Tervooren

After an initial burst in 2005, media coverage of BOINC had dried up. Magazines like Popular Science, Scientific American, PC World, and Wired hadn’t published any articles about volunteer computing (and they still haven’t). I tried to get volunteers to help with this by contacting the editors of these magazines, but it didn’t work.

Discussions w/ Mark McAndrew about “Charity Engine”: a reincarnation of the UD model, with some twists. Mark is a visionary who has contributed some great ideas to BOINC, such as using it for massive distributed storage. He also contributed $60K to keep BOINC afloat during a period when our NSF funding had run out.

Bossa and Bolt

In 2006 I did some work on a project called Stardust@home, which was the first “distributed thinking” project; it used volunteers to look for interstellar dust particles in microphotographs of aerogel.

I was intrigued by the analogy with volunteer computing: in each case there was a pool of workers with varying ability levels, including some that might be malicious, and we needed to use scheduling and statistics to make them into a quantifiably reliable system.

It seemed to me that this approach might be useful in a lot of scientific areas, so I developed a middleware system called Bossa to support projects like Stardust@home. Bossa is a small amount of PHP code. It defines some database tables for keeping track of jobs and statistics about volunteers, and has some logic for deciding what job to send to a given volunteer. It uses BOINC’s PHP code for volunteer registration and web-site features like message boards.
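The kind of dispatch logic described above can be sketched as follows. This is a hypothetical Python illustration, not Bossa’s actual PHP code: the field names (`seen_by`, `error_rates`) and the policy (replicate a job until the chance that everyone who saw it erred drops below a target) are assumptions in the spirit of the text.

```python
# Hypothetical sketch of Bossa-style job dispatch.
# Each volunteer has an estimated error rate; a job is shown to volunteers
# until the probability that its consensus answer is wrong is small enough.

def job_error_prob(error_rates):
    """Probability that every volunteer who saw the job erred."""
    p = 1.0
    for e in error_rates:
        p *= e
    return p

def pick_job(jobs, volunteer, target=0.01):
    """Pick an unfinished job this volunteer hasn't seen, neediest first."""
    candidates = [
        j for j in jobs
        if volunteer["id"] not in j["seen_by"]
        and job_error_prob(j["error_rates"]) > target
    ]
    if not candidates:
        return None
    # Prefer the job whose consensus is currently least reliable.
    return max(candidates, key=lambda j: job_error_prob(j["error_rates"]))

def record_result(job, volunteer):
    """Note that this volunteer has processed this job."""
    job["seen_by"].add(volunteer["id"])
    job["error_rates"].append(volunteer["error_rate"])
```

The statistics side (estimating each volunteer’s error rate from calibration jobs) would sit on top of this.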


Unfortunately Bossa got used by only one or two projects. I worked with an archaeologist on using it to find hominid fossils in the Olduvai Gorge in Africa, but this never happened. Fold It! and GalaxyZoo were developed concurrently. Distributed thinking still hasn’t really caught on.

I also got interested in citizen science as a venue for teaching and training. We want to teach volunteers about science, and train them to do scientific tasks. But the population is so diverse in terms of age, background, ability, and so on. We’d like to have tools for systematically figuring out how to develop effective lessons.

So I developed a middleware system called Bolt that lets you do this, in the same framework as BOINC and Bossa. I wrote an NSF proposal, together with David Baker, to test this approach in the context of Rosetta@home and Fold It!, but we realized it didn’t have a chance, and didn’t submit it. Bolt never got used.

Berkeley@home

Back to BOINC. By this point it was becoming clear that the model in which individual research groups create BOINC projects wasn’t working as I had hoped. It seemed like the way forward was “umbrella” projects like WCG, which serve a large and dynamic set of scientists. The ideal organizational level for such a project seemed to be a university. A university could operate a BOINC project in the same way that it operates a campus-wide cluster, and could promote it to its students and alumni as well as to the public. For example, UC Berkeley has 30K students and 500K alumni, and ways of reaching them such as the alumni magazine.

I proposed this idea to the UC Berkeley administration, and it was well received. I wrote an NSF proposal, which was co-PI’d by the Chancellor of Research and the Director of IT, to develop a campus-level umbrella project. To my astonishment, it was turned down. One of the four reviewers seemed to have an extreme and irrational prejudice against volunteer computing.

Client emulator

We had some “power users” who stress-tested the client. They attached to lots of projects and tweaked preferences. This would often expose problems in the client’s scheduling policies, where cores would be idle or jobs wouldn’t finish on time. It was very hard for me to remotely diagnose these problems. So I developed a “client emulator” that lets these users upload their state files and config files; a program on the BOINC server then simulates what their client would do. I could then take a close look and see what was going on. It’s called an “emulator” because it uses the scheduling code from the actual client.
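The general idea of such a simulation harness can be sketched in a few lines. This is a toy illustration, not the emulator’s actual code: it advances a simulated clock instead of real time, runs a simple earliest-deadline-first policy over a job list (a stand-in for the client’s real scheduling code), and reports the deadline misses a power user would complain about.

```python
# Toy sketch of client-emulator-style simulation (hypothetical, not BOINC's code).

def simulate(jobs, ncpus, dt=3600.0):
    """Simulate earliest-deadline-first CPU scheduling in dt-second steps.

    jobs: list of dicts with 'id', 'remaining' (seconds of work),
    and 'deadline' (seconds from now).
    Returns the IDs of jobs that miss their deadlines.
    """
    now = 0.0
    missed = []
    while any(j["remaining"] > 0 for j in jobs):
        # Run the ncpus unfinished jobs with the earliest deadlines.
        runnable = sorted(
            (j for j in jobs if j["remaining"] > 0),
            key=lambda j: j["deadline"],
        )[:ncpus]
        for j in runnable:
            j["remaining"] -= dt
            if j["remaining"] <= 0 and now + dt > j["deadline"]:
                missed.append(j["id"])
        now += dt
    return missed
```

The real emulator replays the uploaded client state through the actual scheduling code and logs every decision, which is what makes the diagnosis possible.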

2008

AQUA@home (D-Wave Systems): Kamran Karimi; multicore app
NQueens@Home (Chile)
GPUGrid.net (Barcelona Biomedical Research Park). Gianni De Fabritiis helped us debug the new GPU-related features.
Orbit@home (Planetary Science Institute; asteroids)
Quake Catcher Network

Jan: PetaFLOPS barrier broken

We added GPU support:

client (NVIDIA, ATI): detect, schedule
RPC: separate work requests
scheduler

Plan class mechanism

Adaptive replication
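Adaptive replication can be illustrated with a small sketch. This is a hypothetical Python rendering of the idea, not BOINC’s server code: the field names (`consecutive_valid`, `error_rate`), thresholds, and the exponentially-weighted update are all assumptions. The policy is that hosts with a proven record usually get unreplicated jobs, with a random fraction still replicated as a spot check, while unproven or error-prone hosts always get replication.

```python
import random

def replication_level(host, spot_check_rate=0.1, max_error_rate=0.001,
                      rng=random):
    """Decide how many copies of a job to create for a given host.

    Reliable hosts usually get a single unreplicated copy; a random
    fraction of their jobs is still replicated as a spot check.
    """
    reliable = (host["consecutive_valid"] >= 10
                and host["error_rate"] < max_error_rate)
    if reliable and rng.random() > spot_check_rate:
        return 1        # trust the host; no replication
    return 2            # replicate and compare results

def update_host(host, result_valid):
    """Exponentially-weighted update of the host's estimated error rate."""
    host["error_rate"] = 0.95 * host["error_rate"] + (0.0 if result_valid else 0.05)
    host["consecutive_valid"] = host["consecutive_valid"] + 1 if result_valid else 0
```

The payoff is that most jobs on reliable hosts need only one execution instead of two, roughly doubling effective throughput.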

Daniel Gonzalez, who had worked at CERN on LHC@home, visited me at Berkeley in the summer. He developed Jarifa, an account manager with a different model: volunteers could vote on what projects to support. So the more active volunteers could find new projects, and the computers of passive volunteers would do work for these projects.

Daniel worked on many aspects of BOINC over the years. Most recently he helped set up Test4Theory at CERN, which pioneered the use of VM apps, and he developed an early VirtualBox wrapper.

2009

NFS@home (Cal State Fullerton)
VTU@home (Vilnius Tech, Lithuania)
Cosmology@Home (U. of Illinois)
Virtual Prairie (U of Houston)

April: workshop in Taipei (Academia Sinica); org. by Francois Grey

Aug: Progress Thru Processors launches. This is an effort, conceived and funded by Intel, to use Facebook as an interface to BOINC, in hopes that it will “go viral”.

BoincTasks (power-user GUI) by Fred Melgert, who has driven various GUI RPC improvements

Set up Pootle; created a web-based system for translating:

BOINC web site text
Manager GUI text
Generic project web text
Project-specific text (only 2 projects using this?)


Contacted by Oded Nov from NYU; did a series of studies of volunteer motivation, based on a survey correlated with behavior (credit, longevity, message-board activity). The conclusions were similar to the earlier poll’s. In 2013 WCG did their own survey, with similar results.

2010

eOn (U. Texas)
CAS@home (CAS). The driving force was Francois Grey. It was conceived as an umbrella project that would host apps from various Chinese scientists. The main technical person was Wenjing Wu. CAS@home was the driver for various remote job submission and multi-user project features. Wenjing and I designed and developed these features during visits to Beijing in 2011 and 2012. Since then, Wenjing has done a lot of work for CERN and developed most of ATLAS@home.

QuantumFIRE (Cambridge)

May: AQUA@home releases first OpenCL app

Trilce Estrada develops project emulator.

Sony bundles BOINC/WCG on VAIO computers

BOINC client and server available as Debian packages. These have been developed and maintained by Gianfranco Costamagna, who has also contributed to the Unix build system, and to using automated code checkers to find security vulnerabilities.

August: communication with Michael McLennan, from nanoHUB and HUBzero. Discussed adding a BOINC back end to nanoHUB. Worked with Ben Haley to develop a special wrapper for their tools.

April: Einstein@home pulsar discovery

Development

Notices in GUI

Estimating job runtimes with reasonable accuracy is difficult and important. Projects supply estimates of the number of FLOPs a job will use, and we divide this by the CPU speed. But this may produce systematically wrong estimates, so we introduced the idea of a per-project client-side correction factor.

This wasn’t adequate for projects with multiple apps, and GPUs broke it completely. So we changed things so that the server keeps track of the runtime statistics of each app version on each host.

It turned out that having this data also made it possible to assign credit in a way that had the properties we wanted, such as a given job getting the same credit whether it’s done by CPU or GPU.
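The two estimation schemes just described can be sketched side by side. This is an illustrative Python sketch, not BOINC’s actual code: the function and field names are hypothetical, and the second scheme is reduced to a running mean per (app version, host).

```python
def naive_estimate(fpops_est, host_flops, duration_correction_factor=1.0):
    """First scheme: project-supplied FLOP estimate divided by CPU speed,
    scaled by a per-project client-side correction factor."""
    return fpops_est / host_flops * duration_correction_factor

class AppVersionStats:
    """Second scheme (sketch): the server keeps runtime statistics for
    each app version on each host, and predicts from observed runtimes."""
    def __init__(self):
        self.n = 0
        self.mean_runtime = 0.0

    def update(self, runtime):
        # Incremental running mean of observed runtimes.
        self.n += 1
        self.mean_runtime += (runtime - self.mean_runtime) / self.n

    def estimate(self, job_size, ref_size=1.0):
        # Scale the mean observed runtime by relative job size.
        return self.mean_runtime * job_size / ref_size
```

The advantage of the second scheme is that it automatically absorbs differences between apps and between CPU and GPU versions, which is exactly what broke the first scheme.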


The new credit system proved wildly unpopular with power users, perhaps because it’s based on statistics that can take a while to converge. And it certainly may still have some bugs in it. But after investing a certain number of hours in it I had to move on to other things.

2011

DistRTgen
Surveill@Home
Mersenne@home
Campus grid at Univ. of Westminster
Test4Theory (CERN)

Junkets

There were two “Citizen Science” trips organized by Francois Grey where several of us (me, Daniel Gonzalez, Ben Segal) gave talks about volunteer computing and distributed thinking, and tried to get scientists interested in starting projects. In March we did this in Taipei and Beijing. There was a small but enthusiastic BOINC group in China called Equn. However, we came away with the impression that Chinese computer owners were extremely security-conscious and that few of them would volunteer.

In May we made a similar trip to Brazil, and gave talks in Brasília, Rio, and São Paulo. We had a good time, but not much came out of either trip.

VirtualBox

It was clear that dealing with multiple platforms was a bottleneck for BOINC. We’d been talking about VM apps for a long time, and now we got serious.

We worked a lot on the Vbox wrapper and other stuff related to virtual machines, such as the ability for the client to handle GB-size files without getting bogged down, and to download compressed files in an interruptible way.

2012

SAT@home (Russian Acad of Sciences)
FightMalaria@home (U. College Dublin)
Oproject@Home
Volpex (U Houston): try to run MPI-type programs on BOINC

Android

Starting in 2007 I communicated with Jeff Eastlack, who worked at Freescale, a company that makes ARM systems-on-a-chip for cell phones. He thought that they could be used for volunteer computing, and might have better FLOPS-per-watt than CPUs. He’d occasionally cross-compile and benchmark SETI@home on their latest chip. But by 2011 the FP vector units were getting pretty fast, and people were talking about the demise of the desktop computer: everything would be mobile. It was clear that we needed to get BOINC running on mobile devices.

In 2010 Pavel Michalec developed and released a GUI that ran on Android and could be used to manage BOINC clients on desktop computers.

Mateusz Szpakowski compiled the BOINC client and the SETI@home app for Android/ARM, and packaged it together with Michalec’s GUI. But he wasn’t interested in working with us.

I had been turned down twice by the Google Summer of Code program, but this time they accepted my proposal to develop a BOINC Android client, and I hired Joachim Fritsch, who had a complete prototype working by the end of the summer.

We wanted to use the standard BOINC client, and just write a new GUI. This turned out to be doable; we had to make a few tweaks to the client.

I told the BOINC projects about this. Einstein@home and WCG were very eager to deploy Android/ARM app versions, and Eric Korpela did it for SETI@home also.

Condor collaboration

I was desperate to get BOINC used by more scientists. I contacted Miron Livny, who runs the Condor project, which makes software for grid computing, and who had created the Open Science Grid. I pitched the idea of creating a BOINC project to provide additional computing power to OSG, and developing an “adapter” so that jobs could be transparently migrated from Condor to BOINC. He liked this idea, and we got some funding from NSF to pursue it.

We worked on this through 2013 and into 2014, and got it all working. I did a lot of work on BOINC’s mechanisms for remote job submission and for remotely managing input and output files. The Condor guys were obsessed with performance and scalability, and we reached a point where we could submit several thousand jobs per second.

However, when it came time to actually use this for something, Miron’s interest seemed to have vanished, and he stopped answering my emails. The project is dead as of now.

Git

Rom and Oliver Bock succeeded in moving our source code to Git. We didn’t move to GitHub because we would have had to pay.

2013

Asteroids@home (Charles Univ, Prague)
Subset@home (U. North Dakota)
RNA World (Rechenkraft.net)

Einstein@home hires Joachim to work on Android. We make some discoveries about batteries – like, they can overheat and die – and change things accordingly.

July: BOINC/Android launched. Publicity fizzles.

Nov: Installer includes VBox

Reimplement scheduler (score-based; for job size matching)

2014

Convector (Czech Tech Univ)
ATLAS@home
Bitcoin Utopia: forces me to generalize the credit system and start thinking about generic coprocessor support.
GridOctane (India; BOINC-based desktop grids for bio; Jenkhar Dixit)

Power to Give and Power Sleep

HTC Power to Give: HTC’s marketing company had approached us in 2012 about the idea of HTC developing a branded BOINC client and pre-installing it on their phones. This lay dormant for a while but it resurfaced late in 2013, after a meeting with Cher Wang, and moved ahead quickly.

At HTC’s request I made a couple of security changes:

GUI RPCs go over Unix-domain sockets, not TCP
Network RPCs containing passwords use SSL

I also hurried along some projects, such as SETI@Home, to release Android apps; we went from 3 to 8.

Power to Give is a work in progress. It’s not yet being pushed to phone owners, so participation is in the 10Ks rather than the millions.

Much to HTC’s distress, Samsung Power Sleep was released around the same time. It has a radically simpler model – no registration, no project choice, an alarm clock interface. I’m very enthusiastic about this project as well.

Reflections on software

What we did right

Good factorization and good interfaces; extreme backward compatibility:
client/scheduler
app/client
account manager / project server
GUI/client
project/statistics sites

Server architecture:
DB-centered, multiple daemons
scalable in various ways

Good ideas: account manager architecture, anonymous platform, plan class mechanism; lots of generality

Over the years, software accretes like sediment on the ocean floor. After a while it can “fossilize”, that is, become so big and poorly structured that you can’t replace, change, or fix it any more. We’ve managed to avoid fossilization in BOINC.

Things we need to change eventually

Coprocessor model: doesn’t handle multiple heterogeneous GPUs from the same vendor; requires kludgy workarounds; coprocessor types are hard-wired in the code

Prefs, general and project-specific

Things we should have done differently

Decentralized model

The only centralized parts of BOINC are:

Its web site, where you download the client
A web RPC that returns a list of projects, and info about them

It would make things a lot simpler if there were a central notion of user account and team.

Volunteer interface complexity

The BOINC client and web interfaces were designed for power users. The original GUI was complex and intimidating; when we switched SETI@home to BOINC, there was tremendous backlash among volunteers and we lost many of them - perhaps half.

We could have taken the opposite approach: to participate in BOINC, you install a program. That's it - no email address or password, no need to decide what science you want to support. These things could be available for those who want them.

Server complexity

Similarly, on the server side BOINC was designed to be powerful and general. I wanted BOINC to be able to do anything that anyone could ever ask. We've succeeded in this, but at a cost: BOINC is complex. You have to learn huge amounts of stuff just to process a job.


We could have taken an approach like Condor: to submit a job you put your executable and input files in a directory, run a command, and some time later the output files appear.

Reflections on project management

BOINC has limited resources. I’ve managed to get NSF grants to support me, Rom, and Charlie. I’ve asked NSF for more money, enough to hire someone else, and been turned down. If I were Miron Livny I could find other sources of money; but I’m not.

The three of us have our strengths and weaknesses. I hate to manage. Rom is great at most things, but he hates to write, and he avoids it at all costs. Charlie is good with details, but loses track of the big picture.

With these resources we’re trying to do something extremely ambitious. I think we’ve done a good job of balancing our resource allocation among various tasks:

Develop features needed by specific projects
Develop features that we think might get used someday
Testing, release management, documentation; making a solid product
Mobilize and use volunteers, like the Alpha testers and the translators

We’ve certainly dropped lots of balls, and pissed off some people. But we’ve got a tremendous amount done.

Some people think I run BOINC in a dictatorial way that drives off potential volunteer programmers. That may be true to some extent. But the contributions of volunteer programmers are often a mixed blessing. It may take a lot of time to figure out how they’re doing things and to change their code to match your style. Or maybe you just check it in, and pay the price later when you need to fix something. So actually I’m pretty happy with the code contributions we’ve gotten.

Some people complain about our lack of release management of the server code. Basically, trunk is the recommended version. If you find a bug, you tell me and we fix it. There are reasons for doing things this way, which boil down to lack of resources for testing. We’re in the process of setting up an automated testing system, which will help.

The part of BOINC I’m least happy with is the documentation. Every time I look at it I’m horrified, and I spend some time patching it up. But it needs a complete rewrite.

Why the vision didn’t happen

That brings us up to date. Now let's return to my original vision for BOINC, and why it didn't happen.

Lack of projects


First, instead of thousands of projects, there are a couple of dozen. Only a few of these are American academic science projects. Instead of a new project every day, there is maybe one per year. Why is this?

It's hard and expensive to create a BOINC project. It's a time sink, and scientists are busy.
Most scientists don't know about volunteer computing. We never found a good way to publicize it among scientists.
IT groups are inherently hostile toward volunteer computing, because they view it as a threat to their budget and power.

Lack of publicity and outreach

What projects there are have not, as a rule, devoted a lot of resources to outreach. They don't get much media coverage. Their web sites are stagnant.

Scientific American, Popular Science, and Wired have never run articles about volunteer computing.

Part of the problem is too many “brands”: SETI@home, GridRepublic, PTP, Power to Give, etc.

Declining volunteer population

So the volunteer pool has shrunk. SETI@home had roughly a million volunteers; this has gradually declined to about 300K over all projects. The volunteer population is old, male, and technical.

Disinterest from Computer Science

Academic computer science has ignored volunteer computing. Volunteer computing is a gold mine of interesting research problems. Everything about volunteer computing is hard:

how to debug apps running on a million diverse computers
how to estimate job runtimes
how to do reliable computing on untrusted computers
how to grant credit in a way that's fair and cheat-proof
how to make servers that can handle millions of clients

We've designed solutions to all these problems, but we haven't researched any of them; there are dozens of PhD theses there.

There are hundreds of conferences on cloud computing, zero on volunteer computing. No conferences on distributed computing include volunteer computing in their list of topics.

Two research groups have worked on volunteer computing: Derrick Kondo and Arnaud Legrand at INRIA, and Michela Taufer at U. of Delaware.

I've heard that people have been told: "you're not going to get tenure working on volunteer computing".


I maintain a list of what I consider to be interesting research projects on the BOINC web site. For a while I sent email to people in the UC Berkeley CS Dept every fall, suggesting these as topics for grad students, but never got a response.

Disinterest from funding agencies

While NSF has continued to fund BOINC at fairly low levels, neither they nor other US funding agencies have encouraged volunteer computing in any other way. They haven't encouraged the science projects they fund to use volunteer computing.

They haven't used their PR resources to promote volunteer computing, even though it benefits them.

In 2010 NSF issued a planning document about scientific computing in the next decade. It talked about supercomputers, clouds, and clusters, but made no mention of volunteer computing.

Why is this?

In the part of NSF that supports scientific computing, almost everyone used to work at a supercomputer center. Supercomputer people have a natural antagonism to volunteer computing. They like the idea that scientific programs can only be run on supercomputers or clusters.

I've heard remarks like "do you expect real scientists to use something that was used in SETI?". Proposal reviews have contained comments like "In 17 years, volunteer computing has yet to make a single discovery".

Interrelation of factors

All these factors are interrelated. If there were more volunteers, there might be more projects, and they'd do more outreach, and conversely.

If there were more projects, funding agencies might be more supportive, and conversely.

If funding agencies were supportive, a computer science bandwagon might arise.

And so on.

My failures as leader

Because of SETI@home, I was thrust into the position of leader and spokesperson for volunteer computing. I’m not well-suited to this role, and I’ve mostly done a bad job of it. The reasons include:

The wild success of SETI@home made me over-confident. I felt like I had a Midas touch, and whatever I did would be a success, and the world would beat a path to my doorstep.

I thought I knew who my audience was, both volunteers and scientists, but I was wrong.
I don’t have the attributes of a leader. I don’t like schmoozing, managing people, or evangelizing. I don’t have the charisma of Steve Jobs. My personality grates on some people.


Personal negatives and positives

So my experience with BOINC has been, in some ways, a rather bitter disappointment. I often feel like Don Quixote tilting at windmills. I've often thought about giving up. If Bossa and Bolt had turned into a viable career direction I probably would have done that, and abandoned BOINC.

On the other hand:

BOINC makes a difference in the world.
Working on BOINC is still fun for me:
I have the freedom to work on what I want.
It’s given me a chance to work on hard problems with lots of energetic, smart, and idealistic people, many of whom are in this room. These interactions, many of which have developed into friendships, have been by far the most pleasurable part of working on BOINC.

What now?

I still stubbornly think that volunteer computing is a great idea. Even though my original grand vision hasn't worked, maybe there's some way to achieve its goals of kick-starting computational science and enlightening the public.

Joining the HTC mainstream

First, if we’re going to help large numbers of scientists, we need to abandon the idea that they’re going to learn about BOINC, create projects, and write news articles for the public.

Computational scientists who need HTC use supercomputer centers and national grids, and to a lesser extent portals like nanoHUB. These “mainstream HTC providers” port their apps for them, and provide them with tools for submitting jobs and workflows, and for visualizing results. They need these things to do their work. BOINC doesn’t provide them. So instead we need to integrate BOINC with these mainstream HTC providers.

Our experience with Condor showed us that it’s easy to build “adapters” between existing systems and BOINC; 3GBridge is another example of this.

There are issues about how to port applications. One approach is to pick a few widely-used applications like Gromacs and run them as regular BOINC apps, or under the wrapper. The long-term solution is to use VirtualBox, and create a “universal” application, where the executable is part of the job.

We’ve submitted a proposal to NSF to create BOINC back ends for nanoHUB and TACC. If this succeeds, it has the potential to create a number of new umbrella projects, like TACC@home.

The volunteer interface


Our current approach – where volunteers choose from a long list of projects – is too complex, and doesn’t match the fact that most volunteers want to support scientific goals – like curing cancer – not specific projects.

The way things should work is that when you sign up for BOINC, you say what goals you want to support – for example, biomedical, astrophysical, or environmental. That’s it. If you want to go deeper, you can indicate other preferences, like supporting research from a particular continent, country, or university. Then, from among the existing projects, your computer runs the jobs that most closely match your interests.

It happens that the account manager architecture provides an efficient and fairly easy way to implement this. Imagine there’s a web site, and an account manager, called Science@home. The BOINC client is configured to attach to this by default. Science@home provides the interface for setting science prefs.

Science@home maintains a list of science projects, and apps within each project. For each of these, it has attributes such as the area and subarea of science, and the nationality and institution of the scientist. In its function as account manager, it attaches each client to projects, and apps within projects, that match the prefs of its owner.

The algorithm for doing this embodies an allocation policy: if there are 2 projects both doing cancer research, how many hosts should be attached to each one? We could be idealistic and make this policy democratic in some way, letting volunteers vote on it. Or we could let funding agencies like NSF determine the policy. Maybe they’d form a committee of experts, or maybe they’d make projects pay to get computing.
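The matching and allocation idea can be sketched in a few lines. This is a hypothetical illustration, not an existing Science@home implementation: the preference fields (`areas`, `countries`), the per-project `share` values, and the hashing of host IDs over allocation shares are all assumptions made for the sketch.

```python
def matching_projects(projects, prefs):
    """Projects whose science area (and optional country) match the
    volunteer's stated preferences."""
    return [
        p for p in projects
        if p["area"] in prefs["areas"]
        and (not prefs.get("countries") or p["country"] in prefs["countries"])
    ]

def attach(host_id, projects, prefs):
    """Deterministically pick one matching project so that, across many
    hosts, attachments are proportional to each project's allocation share."""
    matches = matching_projects(projects, prefs)
    if not matches:
        return None
    total = sum(p["share"] for p in matches)
    point = (host_id % 1000) / 1000.0 * total   # spread hosts over the shares
    acc = 0.0
    for p in matches:
        acc += p["share"]
        if point < acc:
            return p
    return matches[-1]
```

Whatever body sets the `share` values (volunteers voting, a funding-agency committee, or payment) is exactly the allocation policy question raised above; the mechanism itself is indifferent to it.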

I favor the latter, for pragmatic reasons. Unless we get funding agencies, and the mainstream HTC community which they represent, directly involved in volunteer computing, it’s going to continue to languish on the fringes of the HTC world. They may do things in a political and unfair way, but I think that’s an inevitable consequence of becoming mainstream.

The hope is that this will help with publicity also. Science@home will become the primary “brand” for volunteer computing, and maybe funding agencies will devote resources to promoting it.

Source of computing power

The space of consumer computing devices is blurring into a continuum. At one end are desktops, always plugged in, one or more GPUs, maybe several TeraFLOPS. At the other are smartphones, plugged in only sometimes, optimized for low power consumption, currently a few GigaFLOPS. In the middle are laptops and tablets of various types; there are also game consoles, and appliances like set-top boxes.

BOINC needs to support as many of these as possible. Currently we have everything covered except for

Game consoles
Windows and Apple phones


There are some concentrations of computer power that we need to focus on:

Underrepresented groups
Bitcoin mining GPUs
PC game machines (Steam)
Android (future)