Questions and Answers : Wish list : again a call for smaller workunits!
Message board moderation
Author | Message |
---|---|
Send message Joined: 18 Nov 04 Posts: 2 Credit: 235,374 RAC: 0 |
Hello there! Let me (once again, I know that other people have done so before) express the wish for smaller workunits. The problem is, I see the climate predictions as one of the most important and useful (if not THE most important and useful) things for mankind in general, and as the best BOINC project as well (being honest, SETI and that stuff is fun but not really worth doing). climateprediction is ABSOLUTELY necessary and I would really like to be part of it and help solve that stuff. Now the problem is that all my PCs are not running the whole day but only part of it, and that e.g. on my laptop (Windows XP, Intel Pentium M 1.5 GHz) things do not even run at full speed (overheating, the fan is really loud then) but at only 50%, so your current models would take something like over a year. And that would still mean I would have to do CPDN ONLY; I would not be able to do other BOINC projects as well, which I really would like to do with 50% of my time (which would still give 50% to CPDN!!). Being over the deadline for that long, I would (according to BOINC) not even be sure to get credit for it. Worse than that, BOINC sees by itself that CPDN is going to miss the deadline and suspends all other projects so that it runs only CPDN (if this is what you want to enforce, that's not really fair to the other projects). Additionally, your project bombs sometimes, so the whole CPU time is wasted! Having HUGE WUs makes this worse; losing a small one is not that much of a problem. I think a lot of people have the same problem. Producing smaller WUs on your side would get many more people involved (I cancelled, e.g., for the reasons explained), and having many more people involved would speed up your project a lot. Maybe you should really make BOTH sets (small and big) and let people choose which ones they want to do. 
Your arguments against don't really count for me: - I would not care about traffic; I've always got a fast internet connection around, and I would prefer to transfer a few MB a few times a day rather than cancelling all other projects or wasting CPU time. - I do not want to use this advanced visualization stuff. Let the people who use it take the big chunks of work, but also let the people who don't use it get smaller units for fast and useful work. I DO see the problem that climate models are very complicated and big (I am a physicist, after all, and know how these things work), but exactly THAT is the reason why you should try to get as many people involved as possible. I think many people are put off by the big WUs. You could have around 50% of at least MY CPU time by offering smaller units. Now, at the end, I would like to ask people to reply to this question in this forum so that one can see HOW many people are concerned by this; I think it should be very many. If I am the only one: fine, that little CPU time lost should be fine, but I just don't believe that I AM the only one concerned about this. Hoping that something changes, that you start offering smaller units and prevent your models from bombing, goodbye and greetings from frosty Heidelberg, Boris Häußler |
Send message Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0 |
The short answer is: it's not going to happen. It's been discussed many times in the past on the Message Board, including the reasons. Currently there are 45,137 machines running the project. We'll have to make do with these. |
Send message Joined: 17 Aug 04 Posts: 753 Credit: 9,804,700 RAC: 0 |
There is a climateprediction experiment being run currently which has smaller workunits - the Seasonal Attribution Project. But that requires a fast computer and a lot of memory. As Les hints, there are few easy ways of doing climate research because of the complexity of the mechanisms involved. |
Send message Joined: 18 Nov 04 Posts: 2 Credit: 235,374 RAC: 0 |
The short answer is: it's not going to happen. Well, then that means I will no longer be a contributor to your project, sorry. And I do know many other people who feel the same. It's your problem now; if you don't WANT help, I cannot force you to accept it. Boris |
Send message Joined: 5 Aug 04 Posts: 426 Credit: 2,426,069 RAC: 0 |
The short answer is: it's not going to happen. It's not that the team does not want to make shorter workunits; it is that it is impractical for them to do so. This may change in the future, and if it does, I am certain that the development team here will want to do that. BOINC WIKI BOINCing since 2002/12/8 |
Send message Joined: 22 May 05 Posts: 2 Credit: 1,092 RAC: 0 |
The short answer is: it's not going to happen. It may be a stupid question, but is there any need for the models starting in 1920? |
Send message Joined: 7 Aug 04 Posts: 2187 Credit: 64,822,615 RAC: 5,275 |
It may be a stupid question, but is there any need for the models starting in 1920? It's part of the experimental strategy. Those models that do well at predicting past/known climate will have more confidence placed in them for the future climate they predict. Basically, this coupled model that is now being run is a combination of Experiments 2 and 3 described on this page. |
Send message Joined: 22 May 05 Posts: 2 Credit: 1,092 RAC: 0 |
It may be a stupid question, but is there any need for the models starting in 1920? But surely they could be started more recently? Also, couldn't the work units be made smaller by only running one prediction, e.g. sea temperature, and by allocating the predictions which require more processing to the computers which have faster processors? At the rate my computer is processing the work unit, by the time the deadline comes the work unit will not even be into the new millennium. |
Send message Joined: 5 Aug 04 Posts: 1496 Credit: 95,522,203 RAC: 0 |
Smaller W/Us are a perennial complaint, on this and the other boards. For reasons of data/science integrity, not to mention horrendous amounts of data transfer, it can't be done. Why 80 years each of 'hindcast' and 'forecast'? Not "surely they could be started more recently" at all. Search on IPCC on the other Board (middle link on 'Message Boards' at left). It's a matter of conforming to international standards. By the way, each of these coupled models has the experience of one of a set of 200-year 'spinup' runs to start... "We have met the enemy and he is us." -- Pogo Greetings from coastal Washington state, the scenic US Pacific Northwest. |
Send message Joined: 22 May 06 Posts: 2 Credit: 0 RAC: 0 |
The short answer is: it's not going to happen. Well, you can delete two more from me, as it is too hard on both new and old computers to run one WU for 4 to 8 weeks straight like that, and that is running 24/7 on that one project at 100%. But as I run BOINC at 50%, it is more like 4 to 6 months for one WU. Other projects have found out that people are willing to spend a few hours on one WU, even a day and a half, but they will delete anything longer than that. Smaller WUs and I might come back. |
Send message Joined: 22 May 06 Posts: 2 Credit: 0 RAC: 0 |
The short answer is: it's not going to happen. And as of now, I just checked and saw that you only have 26,639 computers running all three projects as of today; that should tell you something. |
Send message Joined: 5 Aug 04 Posts: 1496 Credit: 95,522,203 RAC: 0 |
... it is too hard on both new and old computers to run one WU for 4 to 8 weeks straight like that, and that is running 24/7 on that one project at 100%... Smaller WUs and I might come back. It doesn't hurt properly maintained machines. I've run as many as eight machines simultaneously, flat-out, over the last ~3 years, machines built solely for CPDN. Recently, one finally gave up the ghost: a capacitor failed on the motherboard. One can't blame that on CPDN, because it would have failed some day anyway. Others have had better luck with their machines than I have. As to shorter runs, check back from time to time... "We have met the enemy and he is us." -- Pogo Greetings from coastal Washington state, the scenic US Pacific Northwest. |
Send message Joined: 13 Jan 06 Posts: 1498 Credit: 15,613,038 RAC: 0 |
There are two general observations that I can make... One way of viewing the 160-year coupled model is as if it were 16 ten-year models. As long as you run the model long enough to reach the upload at the end of every 10 years, that's useful work from the project's viewpoint, since the scientists can get a lot of data from that 10-year upload. Ideally, of course, the more the better... Secondly, there's a single-year model available, although this model is high-resolution and hence requires a machine with a gigabyte of RAM. The short model takes 12 CPU days to run on my PC (versus 3 months for the coupled model). http://seasonal.cpdn.org I'm a volunteer and my views are my own. News and Announcements and FAQ |
Send message Joined: 5 Feb 05 Posts: 465 Credit: 1,914,189 RAC: 0 |
From what I understand, Seasonal is winding down and will probably not be around much longer (I read this somewhere). As for splitting them into 10-year parts, I believe things would take a lot longer, because so many people would grab the first part when it starts up, and then the next 10 years would have to wait on the analysis of the previous 10 before they could be sent out, etc. The model depends on its own earlier output to have all the information there. It could take years for a 160-year model to finish if people never get past the 2nd or 3rd part because they decide they are not dedicated enough to continue. Yes, we lose models now because of that, but I believe more people are apt to try and finish a full model than to jump ship partway. I think it would also put a lot more strain on the already overworked servers and developers. They would need to run interim processes to couple the correct models and check that the information is clean before it is allowed to move forward, or force that work to be done over. It's a nightmare even to think about how to program that. |
Send message Joined: 5 Aug 04 Posts: 907 Credit: 299,864 RAC: 0 |
I am testing a shorter CPDN workunit, basically a 40-year HadCM3L job (so one quarter of the latest CPDN/BBC experiment). It could also be sent out as 20- or even 10-year jobs. The trick is to send the start dumps every 10/20/40 years (they are sent every 40 now, so we should at least be able to start people at 1960, 2000, and 2040). The trick would be "chaining" the start dumps and reconfiguring them to be sent out again, but it is something we are looking at doing. |
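[Editor's sketch] The "chaining" scheme described in the post above can be illustrated with a short sketch: a long coupled run is split into consecutive segments, and each segment can only be dispatched once the previous segment's restart ("start") dump has been returned to seed it. This is not CPDN's actual server code; every name, number, and placeholder dump here (`Segment`, `next_dispatchable`, the 40-year segment length, the byte strings) is invented purely to show the scheduling constraint under discussion.

```python
# Hypothetical sketch of chaining restart dumps into shorter workunits.
# All identifiers and values are invented for illustration; this is not
# based on CPDN's real scheduler.

from dataclasses import dataclass
from typing import List, Optional

SEGMENT_YEARS = 40               # assumed segment length (could be 10 or 20)
RUN_START, RUN_END = 1920, 2080  # the full 160-year coupled run

@dataclass
class Segment:
    start_year: int
    end_year: int
    restart_dump: Optional[bytes] = None  # dump used to seed this segment
    result_dump: Optional[bytes] = None   # dump returned by the volunteer

def build_chain() -> List[Segment]:
    """Split the full run into consecutive fixed-length segments."""
    return [Segment(y, y + SEGMENT_YEARS)
            for y in range(RUN_START, RUN_END, SEGMENT_YEARS)]

def next_dispatchable(chain: List[Segment]) -> Optional[Segment]:
    """Return the next segment that may be sent out, or None.

    Segment N+1 only becomes available once segment N's result dump
    has come back -- which is why later parts must wait, as the
    discussion above points out.
    """
    for i, seg in enumerate(chain):
        if seg.result_dump is not None:
            continue                      # this part is already completed
        if i == 0:
            seg.restart_dump = b"spinup"  # first part starts from a spinup run
            return seg
        if chain[i - 1].result_dump is not None:
            seg.restart_dump = chain[i - 1].result_dump
            return seg
        return None                       # still waiting on the previous part
    return None                           # the whole run is complete

chain = build_chain()            # 1920-1960, 1960-2000, 2000-2040, 2040-2080
seg = next_dispatchable(chain)   # first segment, seeded from the spinup
seg.result_dump = b"dump@1960"   # a volunteer returns its end-of-run dump
seg2 = next_dispatchable(chain)  # only now does 1960-2000 become available
```

The sketch also makes the trade-off raised earlier in the thread concrete: shorter segments mean more volunteers can take part, but the chain serializes the run, so an abandoned middle segment stalls everything after it.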
Send message Joined: 13 Jan 06 Posts: 1498 Credit: 15,613,038 RAC: 0 |
Pooh Bear 27: I think the original intention was for the Seasonal project to be quite brief, but the target was 10,000 models completed, and it's not even a tenth of the way there yet. My guess is there's probably 6 months to a year of life left in it at least. Once Seasonal is done, then watch out for possible 'regional' models (ultra-high resolution for a part of the globe, and normal resolution elsewhere); Pardeep has mentioned this as a possibility for the future. Carl: That should be useful; I'll make a note of the current restart dump interval. It should improve the number of completed models. I'd vote for a shorter period between restart dump uploads for future models, 10 years rather than 40, providing of course that the servers could cope! :-) I'm a volunteer and my views are my own. News and Announcements and FAQ |
Send message Joined: 29 Nov 05 Posts: 5 Credit: 8,046 RAC: 0 |
They would need to run interim processes to couple the correct models and check that the information is clean before it is allowed to move forward, or force that work to be done over. Thank you for explaining this a little. It makes sense to me (a non-scientist), and hopefully people will stick with it for the long term, because that is usually what science needs in order to work: long division. (^: Jonathan |
©2024 cpdn.org