text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Soft Typing. MFTL is softly typed. This means that (1) everything, including primitive types, is a subclass of Object, (2) type declarations all default to Object and (3) downcasting is automatic. For example, the following program is legal: def fact_untyped (i) if (i <= 1) 1 else i * fact_untyped(i-1) end end But if the programmer types the following, they get much better performance and compile-time warnings about type violations: def fact_untyped (i: int): int if (i <= 1) 1 else i * fact_untyped(i-1) end endEverything's An Expression. In the above example, 'if' is an expression, like the ternary operator in C. So you could write: var x = if (y) 1 else 2 end Newlines End Statements. Just like in sh, bash, JavaScript, and Ruby, newlines in MFTL are statement terminators. (This is slightly funky, but it allows 'if' to be an expression without massively cruftifying the semicolon-placement rules.) Keyword-Based Initialization. Classes generally don't need constructors: class Sample () var x, key: x var y = 10 end s = new(Sample, x: 5) Generic Functions. You can do the equivalent of operator-overloading at run time, not just compile time. abstract class Thing () end class Rock (Thing) end class Paper (Thing) end class Scissors (Thing) end def defeats? (a: Thing, b: Thing) false end def defeats? (a: Paper, b: Rock) true end def defeats? (a: Rock, b: Scissors) true end def defeats? (a: Scissors, b: Paper) true end defeats?(new(Paper), new(Scissors)) // false defeats?(new(Rock), new(Scissors)) // true Yes, we know how to make this go fast. Getters and Setters. No more endless 'getFoo' and 'setFoo' functions! Just write: public class Example () public var foo, key: foo end If you later decide that you need a getter and setter function, just write: public class Example () var real_foo, key: foo public def get foo () real_foo end public def set foo (value) real_foo = value end endThe users of the class will never know the difference. Other Stuff. Things I want, but which I know will require extra work to do right: inlineable iterators, design by contract, integrated unit tests, simple templates. Performance. With full type declarations, it should be possile to compile MFTL to run at speeds approaching that of C. Softly-typed languages are a solved problem. Implementation Status. The parser and the VM are about 50% complete. The compiler was about 2% complete before I threw it out and started over. :-( If I had the luxury of working on this project full-time, I could ship a demo interpreter in about three months, and an MFTL-to-C compiler not long after. Feedback. Comments, suggestions to eric.kidd@pobox.com.
http://www.advogato.org/person/emk/diary.html?start=19
CC-MAIN-2015-11
refinedweb
445
63.49
First time here? Check out the FAQ! RegionOne it gives I am using Mitaka version of opestack and I have configured Magnum on it. After checking the logs magnum conductor reports that the service is unable to fetch state as authorization failed:- log_http_response /usr/lib/python2.7/dist-packages/heatclient/common/http.py:142 2016-12-29 01:43:55.289 24881 DEBUG oslo_service.periodic_task [-] Running periodic task MagnumPeriodicTasks.sync_bay_status run_periodic_tasks /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215 2016-12-29 01:43:55.290 24881 DEBUG magnum.service.periodic [req-695a33ec-39f8-4685-bc85-d9ffd4f3809a - - - - -] Starting to sync up bay status sync_bay_status /usr/lib/python2.7/dist-packages/magnum/service/periodic.py:71 2016-12-29 01:43:55.298 24881 DEBUG keystoneclient.v3.client [req-695a33ec-39f8-4685-bc85-d9ffd4f3809a - - - - -] Authorization failed. get_raw_token_from_identity_service /usr/lib/python2.7/dist-packages/keystoneclient/v3/client.py:303 2016-12-29 01:43:55.298 24881 WARNING magnum.service.periodic [req-695a33ec-39f8-4685-bc85-d9ffd4f3809a - - - - -] Ignore error [reason: A user and password or token is required.] when syncing up bay status. 2016-12-29 01:43:55.298 24881 ERROR magnum.service.periodic Traceback (most recent call last): 2016-12-29 01:43:55.298 24881 ERROR magnum.service.periodic File "/usr/lib/python2.7/dist-packages/magnum/service/periodic.py", line 82, in sync_bay_status 2016-12-29 01:43:55.298 24881 ERROR magnum.service.periodic stacks = osc.heat().stacks.list(global_tenant=True, 2016-12-29 01:43:55.298 24881 ERROR magnum.service.periodic File "/usr/lib/python2.7/dist-packages/magnum/common/exception.py", line 173, in wrapped 2016-12-29 01:43:55.298 24881 ERROR magnum.service.periodic client=func.__name__, message="reason: %s" % sys.exc_info()[1]) 2016-12-29 01:43:55.298 24881 ERROR magnum.service.periodic AuthorizationFailure: reason: A user and password or token is required. 2016-12-29 01:43:55.298 24881 ERROR magnum.service.periodic 2016-12-29 01:43:55.388 24881 DEBUG heatclient.common.http [-] curl -g -i -X GET -H 'X-Auth-Token: {SHA1}9294feeaee720d16a772e0843e796fd74a101421' -H 'Content-Type: application/json' -H 'X-Auth-Url:' -H 'Accept: application/json' -H 'User-Agent: python-heatclient' log_curl_request /usr/lib/python2.7/dist-packages/heatclient/common/http.py:129 2016-12-29 01:43:55.440 24881 DEBUG heatclient.common.http [-] HTTP/1.1 302 Found Location: Content-Length: 211 ----------------------------- Any clue ? here is the magnum.conf file [DEFAULT] transport_url = rabbit://openstack:password@controller debug=true verbose=true periodic_global_stack_list= True pybasedir = /usr/lib/python2.7/dist-packages/magnum bindir = /usr/bin state_path = /var/lib/magnum periodic_enable = true run_external_periodic_tasks = true [api] host = 10.10.10.10 [certificates] cert_manager_type = local [cinder_client] region_name = RegionOne [database] connection = mysql+pymysql://magnum:password@controller/magnum [keystone_authtoken] auth_uri = auth_url = memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = magnum password = password [oslo_concurrency] lock_path = /var/lock/magnum [oslo_messaging_notifications] driver = messaging [oslo_messaging_rabbit] rabbit_host = localhost rabbit_userid = guest rabbit_password = guest [trust] trustee_domain_name=magnum trustee_domain_id=8b673a7e21f74588b8edb28541bb0877 trustee_domain_admin_name=magnum_domain_admin trustee_domain_admin_id=c95849d2d09a4b718f7bb5d2aee77dd5 trustee_domain_admin_password=password Thanks a lot... U saved me from debugging it.... Worked like a charm I am not taking about openstack-ansible project where opestack is deployed in lxc-containers. I want to deploy multi-node openstack Mitaka non-container plain version using ansible playbooks. Unfortunately the playbooks that are available to deploy simple openstack on the internet are till havana release. Does by chance anyone has that ? Too much to see :P ... Can we setup a call where we can debug this further.... It will be great if we can have a team-viewer session... My email id is prateek.khushalani@gmail.com ... Based on what I read in the post I see that the interface on the VM is not getting the IP from dhcp server running for your internal network.. Do s ps -aef |grep dnsmasq on the neutron node and check weather a dhcp server is running for the internal network or not. If the dhcp server is running check the flows in both compute and neutorn node of OVS via command ovs-ofctl dump-flows <bridge> .... If the flows are also fine then do a tcpdump on the interfaces and see weather neutorn node is actually receving the DHCP broadcast message from teh VM or not. we need to debug this step by step... I guess the VIP that you have configured is on HAProxy... Enable the debugging logs of HA proxy and see what it has to say when the "Unable to establish connection happens" 2 hypervisors cannot work on a same physical machine. Openstack can support different compute nodes having different hypevisors. So you can have KVM/QEMU on compute node 1 and ESXI(Vmware) hypervisor on compute node2. Vcenter is a manager which connects to esxi hypervisor but its not a hypervisor in itself we can close this question now as the problem got solved :) See this image... ... Here the data Network(blue line) and Management Network(Red Line) are separate, in your case it would be same ... Configuration wise it depends upon which net topology you are using ..Can you please tell me that Great ... You were able to solve the problem ... :) What are the specs being provided while creating the flavor ?.. Also while launching the instances there would be logs of nova-compute coming can you please provide that.. Make sure debug is set to true in the config file This is not a default design kind of thing. Can you tell which hypervisor you are using . What is happening is the disk that is created for the VM is not in good shape as whatever you are writing on it is getting erased. Also provide weather Cinder is also involved in your setup ? if you want to setup openstack then you need the following networks:- Management network - internal API calls, internal communication of services via AMQP server and also Datbase qeuries. Data Network- For traffic between VM's and Network node. Public Netwok- External network which will be used to access VM's from outside (external world) Ideally for each network it is recommend to have separate Interfaces so 3 NIC's are used but as you mentioned you have 2 NIC's there in this case you Data and Management network will be clubbed on the same interface. It is possible and there are no errors encountered but the only thing is its only good for PoC or testing kind of things and not good for Production Setups Windows templates must be prepared with Sysprep before they can be provisioned on multiple machines. Sysprep allows you to create a generic Windows template and avoid any possible SID conflicts. For more info refer to this link.... . Basically you need cloud-init package inside your windows machine plus you need to create a template of your own as described in the link above My mail id is prateek.khushalani@gmail.com ... Please send me the logs there so that I can have a better look at it prateek.khushalani@gmail.com .... Reach me on this email id... Lets have a talk about this Yes that can also be done... In that case you have to go to the lbaas config file and enable HA proxy drivers and disable octavia .. Follow the same openstack lbaasv1 guide to configure it and it should work fine There are two ways to resolve this: 1. Sign out and properly connect / sign in as the user you wish to use.(In your case login with stack itself, dont go with root and then su to stack) 2. (Only use when step 1 does not work) Run script /dev/null to own the shell ; then try screen again. I recently got a chance to work on both lbaas and lbaasv2... So it goes like this lbaas - it uses HAProxy inside. This runs as a process inside the network namespace. This namespace is present in the network node of the openstack setup. When the user hits the lbaas IP it goes to the network namespace where HAproxy is running and then its al HAproxy working with its pools, members and crazy load balancing stuff lbaasv2- Ot uses an image known as Amphora inside. When we talk about image its actually a VM getting created on the compute node. This entire project is known as Octavia. In layman terms a virtual machine is created instead of a network namespace. Inside this VM the LBAAS service is running and will have its pools, members and crazy load balancing stuff Why have we done it :- I think what we you want it an ARP responder which actually works in the following ways:- Check this value in the ml2_conf.ini file of the neutron node:- arp_responder = False One thing that we have to note here is currently only networks such as VXLAN and GRE(Overlay networks) support ARP responder in OVS. There is already a bug raised for getting this feature in vlan :-... What I can suggest you is if you are worried about the network congrestion and dont want such things to happen then convert your network topology to VXLAN or GRE and then the arp responder will work and will prevent flooding Hmm... this is strange , what my wild guess would be to check the configurations files of neutron and see weather the firewall variable has been set correctly in security groups sections:- [securitygroup] enable_security_group = True enable_ipset = True firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver Let me know how it goes Can you post some more logs... I am unable to figure out what has changed in the logs Can you provide the httpd service logs as well Are you able to get console for instances which do have Windows ? Can you provide me the process logs of httpd service as well if its a normal installation restart the services of nova via service nova-api restart , service nova-conductor restart etc .. etc is it a devstack installation ? .. If yes then you have to go the screen session ... Then go to the partcular tab where nova services are running( ctrl + ") then press ctlr +c to ternmiate the process and then restart it .. Also restart the nova services for changes to take effect Try to see the nova.conf file and check weather 'auth_strategy=keystone' is present under [default] section... I think this will solve the problem There is no term like limits in the entire openstack installation about the limit to the compute nodes as When it comes to availability in openstack every compute node runs a major service known as nova-compute and neutron-openvswitch-agent. Now these two services connect to controller via RabbitMQ which is an AMQP broker ( Again it has no limits to the number of connections) So practically its like the more resources you have the more load your controller can handle. In my experience I was able to add 15 physical compute nodes and for testing I created a fake kind of mechanism and was able to add around 180 compute nodes to controller. OK... if you are intrested in making fake compute nodes then do the following. There are actually 2 ways of doing it : Method 1- Take a physical machine and install docker on it. Create a lxc container and install nova-compute service on it. Method 2- This method requires prior knowledge of AMQP server and Messaging application which you are using in openstack environment. In 90% of cases is RabbitMQ. Also you need to have basic knowledge of python as you have to write a small script. Method 2 is somewhat complicated but it would consume the least number of resources as there are no services involved. Just a piece of code along with threading. You can use any of the methods listed to create a Fake compute node. The agenda of this experiment is just to test how much compute nodes controller in your environment can handle. you wont be able to launch instances or do the networking. What is the problem you face... ?? .. error wise .. Also paste the output of keystone endpoint-list ... and also create an ascii flow diagram .. We need to point all services of openstack1 to contac keystone of openstack2 OpenStack is a trademark of OpenStack Foundation. This site is powered by Askbot. (GPLv3 or later; source). Content on this site is licensed under a CC-BY 3.0 license.
https://ask.openstack.org/en/users/9028/prateek-k/?sort=recent
CC-MAIN-2019-47
refinedweb
2,069
57.57
If you haven't been fortunate enough to see the Activiti in Action session of Tom Baeyens and Joram Barrez at Devoxx last Thursday, this article will get you up to speed with this first full-blown open source business process management suite. Before we dive in the technical details of developing processes with the Activiti engine we will first give a little bit of background about BPMN 2.0 and Activiti. Finally, after you have seen the API in practice, we present the tools completing the Activiti stack briefly and give you some references to get more background information to check out. The road to BPMN 2.0 and Activiti Already back in 2004 the first version of the Business Process Modeling Notation, BPMN 1.0, was finalized. This language enables analysts to model their business. From a developer's point of view, to run these processes the industry standard was of course WS-BPEL 2.0. Besides the fact that you needed two different languages to get from a process model to an executable process, some important constructs like human task or workflow support and cyclic control flow, where lacking. BPMN 2.0 overcomes these issues and defines one language for modeling a business process and execute it on a process engine, so now analysts and developers speak with the same vocabulary and share models without the need of conversion. While traditional vendors were (and still are!) busy to migrate their BPEL engines and BPMN 1.x modeling tools to BPMN 2.0, Tom Baeyens and Joram Barrez started the Activiti project to create a rock solid BPMN 2.0 process engine. These former leaders of jBPM were planning on implementing the BPMN 2.0 specification on the jBPM engine, when Alfresco was looking for a more liberal licensed process engine for the workflow functionality in their document management system. The two parties met and the Activiti project was born! Now, half a year later, on the first of December, the first GA release Activiti 5.0 is coming out. To quote the guys concerning the odd version number for a first release: 'we are not building experimental stuff here'. Build with the knowledge and experience of developing jBPM versions 1 to 4, Activiti is lightweight and has a very clean and open API. Giving the Activiti engine a spin To try out the example we will show you can get Activiti at. Setting up your environment is pretty straightforward and will be explained later on. Now let’s get into the details of developing business processes with Activiti. Note that we’ll focus on the Activiti engine in this article, but we’ll provide more articles about Activiti where we discuss other parts of the project such as the Activiti Modeler and the Activiti Designer. We’ve talked about the new BPMN 2.0 specification a bit but to really understand the technical basics of this upcoming open standard, we’ll start with a very simple process with only a start and an end event with a script task in between. <?xml version="1.0" encoding="UTF-8"?>This simple example shows the basic structure of a BPMN 2.0 XML process definition with a definitions root element. In the process definition we see that most of the process consists of the script task code. If we had printed a text like "Hello world" the XML would have been pretty compact. However it’s good to have a bit more meat inside this process definition, as we’ll see in a bit. You can see that the start and end event and the script task are connected via sequence flows. The sequence flows define the process flow of this simple process flow, so after the start event the script task is executed and finally the end event. <definitions xmlns="" xmlns:xsi="" xmlns: <process id="pizzaorder" name="Order a pizza"> <startEvent id="theStart"/> <sequenceFlow id="flow1" sourceRef="theStart" targetRef="validateOrder" /> <scriptTask id="validateOrder" scriptFormat="groovy"> <script> out:print "Validating pizza order for " + customerName + "\n"; if(amount > 10){ rejectReason = "That is a little bit too much for us " + amount + " pizzas"; pizzaOrderToOven = false; } else { pizzaOrderToOven = true; } out:print "Pizza order was validated with outcome " + pizzaOrderToOven + "\n"; if(pizzaOrderToOven == false) { out:print "Reject reason is " + rejectReason + "\n"; } else { out:print amount + " " + pizza + " pizzas going to the oven\n"; } </script> </scriptTask> <sequenceFlow id="flow2" sourceRef="validateOrder" targetRef="theEnd" /> <endEvent id="theEnd" /> </process> </definitions> In the script code a logging line is printed with the out:print keyword. In the log line we use the customerName variable, which is a process variable that has to be passed as an input variable to the process instance. You can directly use process variables in the scripting logic. The amount variable is another example of a process variable. If the amount of ordered pizza’s is more than 10, a new process variable pizzaOrderToOven will be created and set to false. Else, if the amount is less or equal to 10 the pizzaOrderToOven process variable is set to true. So now we’ve defined our first little process, but what’s the next step? Well, a good practice would be to test this process using the powerful unit testing functionality of Activiti. public class PizzaScriptTaskTest {The unit test is very compact, but does actually quite a lot of things, let’s walk through them. First the ActivitiRule class is a JUnit 4 TestWatchman subclass where the Activiti engine is started and ended automatically for you. The only thing the ActivitiRule needs to do this is a configuration file where the Activiti database is defined and a couple of optional additions like a job scheduler and a mail server. The configuration for our unit test uses an in-memory H2 database and looks like this. @Rule public ActivitiRule rule = new ActivitiRule("activiti.cfg-mem.xml"); @Test @Deployment(resources={"pizzaOrderSimple.bpmn20.xml"}) public void submitPizzaOrderNotValid() { Map<String, Object> variableMap = new HashMap<String, Object>(); variableMap.put("customerName", "JohnDoe"); variableMap.put("address", "1st Street 7623"); variableMap.put("pizza", "Margherita"); variableMap.put("amount", 11); RuntimeService runtimeService = rule.getProcessEngine().getRuntimeService(); String processID = runtimeService.startProcessInstanceByKey("pizzaorder", variableMap).getId(); assertNotNull(processID); } } <?xml version="1.0" encoding="UTF-8"?>The configuration file defines the JDBC connection to the Activiti engine database; in this case a H2 database. As you can see the database schema is created automatically the first time with the create-if-necessary schema strategy. The job scheduler is turned off because we don’t need it yet. In our larger example later on, we’ll do use a job. <activiti-cfg> <database type="h2" schema- <jdbc url="jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000" driver="org.h2.Driver" username="sa" </database> <job-executor </activiti-cfg> So the Activiti engine is started with a single line in our unit test. Then we see an @Deployment annotation above our unit test method. This makes sure the BPMN 2.0 process definition we created earlier is deployed to the process engine before the unit test method is executed. In the script task of our simple pizza order process we use a couple of process variables like customerName and amount, so in the unit test we create a Map of process variables that are used to start the new process instance. To start a new process instance we need to get hold of a RuntimeService instance, which can be obtained via the ActiviRule instance. With the startProcessInstanceByKey method you can then start a new process instance of our pizza order process definition with the Map of variables. When you’ve imported the source code pizza-order project in your IDE (for example Eclipse after you used Maven to configure the project with mvn eclipse:eclipse), you can execute this unit test right away and see that the Activiti engine is started very quickly and the process definition is deployed and started within the second. Powerful stuff right? Implementing a full pizza order process with Activiti The simple pizza order process example provides a good overview of the basic functionality Activiti provides to deploy and start new process instances. But to get even more information about Activiti it’s good to look at a larger example. We’ll implement a more complex pizza order process, which looks like the following process model created with Activiti Modeler. As you can see we’ll still use our validate order script task, but we’ll enhance the rest of the process definition with Java service tasks, an exclusive gateway, user tasks and a boundary timer event. We’ll make nobody happy with showing the full BPMN 2.0 XML for this example, but we’ll walk through every part of the process step-by-step. The full XML implementation is of course available in the source code of this article. To prevent yourself from working with the raw BPMN 2.0 XML you can use the Activiti Designer to design and generate this for you. The first enhancement we see in the process model is an exclusive gateway. An exclusive gateway can be used to implement conditional logic to decide which sequence flow must be taken in the process flow. In this example, if the pizza order amount is more than 10 the pizzaOrderToOven process variable is set to false and the process must be ended. The BPMN 2.0 XML for this exclusive gateway looks like this. <exclusiveGateway id="validateGateway" />As you can see, if the pizzaOrderToOven process variable is true, the next step will be the calculate price task, otherwise the process will be ended. In addition to the great scripting functionality, it’s also very easy to add Java logic to the process via a Java service task. To calculate the price of the pizza order we’ll use a Java service task. As this is no standard BPMN 2.0 construct, Activiti provides this functionality as an extension to the BPMN specification. In XML this looks like the following snippet. <sequenceFlow id="flow3" sourceRef="validateGateway" targetRef="calculatePrice"> <conditionExpression xsi:${pizzaOrderToOven == true}</conditionExpression> </sequenceFlow> <sequenceFlow id="flow4" sourceRef="validateGateway" targetRef="theEnd"> <conditionExpression xsi:${pizzaOrderToOven == false}</conditionExpression> </sequenceFlow> <serviceTask id="calculatePrice"That’s not difficult, is it? Just a fully qualified class name that includes the package name. To implement the CalculatePriceService it’s not complex either, because with the JavaDelegation interface of the Activiti framework we only have to implement an execute method. For our pizza order example the price calculation is implemented as follows. activiti: public class CalculatePriceService implements JavaDelegation {The input parameter of the execute method is a DelegateExecution instance, where you retrieve information from the running process instance. A typical usage of the DelegateExecution is to retrieve and set process variables in the process instance. In this example we retrieve the pizza and amount process variables to calculate a price and set this result as a new process variable in the process instance. @Override public void execute(DelegateExecution execution) throws Exception { String pizzaName = (String) execution.getVariable("pizza"); int amount = (Integer) execution.getVariable("amount"); System.out.println("Calculating price for " + amount + " " + pizzaName + " pizzas"); int price = 0; if("margherita".equalsIgnoreCase(pizzaName)) { price = amount * 6; } else { price = amount * 8; } System.out.println("Price will be " + price); execution.setVariable("price", price); } } Quickly on to the next step of the process, baking the pizza’s, which is represented as an user task in the process definition. A user task is assigned to a specific user or a list or group of candidate users. The user has to claim and complete the user task before the process will progress to the next state. With the Activiti Explorer you can claim and complete user tasks via a web interface, as we’ll see later on when we’ve complete the process. An additional function of Activiti is the form implementation. You can define a form with simple HTML tags and configure it on a user task. Then when a user wants to complete that user task in the Activiti Explorer that form is shown to the user. In our pizza order example we’ll use two forms, one to start the process and one to implement the bake pizza user tasks. The forms look like the following figures in the Activiti Explorer. Now let’s look at how we can implement the second form where we have to enter the expected delivery time and see how easy this is. <h1>Get the pizza to the oven</h1>You only have to define simple HTML elements to implement the form. We can use process variables directly in the user task form like we do with the ${customerName} variable. And to define an input field we use the standard HTML input element. The delivery field value will automatically be available as a process variable when the user task form is completed. By specifying an additional hidden input element you can make the input field required. Now let’s look at how we use this task form in the BPMN 2.0 XML of our pizza order process. <p> ${customerName} would like to get ${amount} ${pizza} pizzas. </p> <p> Total price : ${price} </p> <table> <tr> <td> <label> Expected delivery time:<br/> <input type="text" name="delivery" value="" /> <input type="hidden" name="delivery_required" value="true" /> </label><br/> </td> </tr> </table> <userTask id="bakePizzas" name="Bake the pizza"The user task is assigned to fozzie, one of the default users when you install Activiti. The pizzaBake.form user task form is the one we defined in the previous code snippet. So this is all we have to do to assign an user task with a form to a specific user. activiti:formKey="pizzaBake.form" activiti: We are approaching the end of our pizza order example process, but we have to look into two more parts of the process definition. First the boundary timer. A boundary timer can be used to define a specific time period on a task and when that time period has passed, a boundary timer event will fire. In the pizza order example we have defined a boundary timer of 1 minute on the bake pizza user task of fozzie. When fozzie doesn’t complete its user task within 1 minute, the user task is deleted and a task in the management group is inserted. The management group is also a default group defined when you install Activiti and kermit is the only user, which is part of that group. So Kermit can claim and complete the user task after the boundary timer of 1 minute is fired. In BPMN 2.0 XML it looks like this. <boundaryEvent id="escalationTimer" cancelActivity="true" attachedToRef="bakePizzas">The boundary timer event is attached to the bakePizzas user task we defined in the previous code snippet. So after 1 minute the timer event is fired and the bakePizzas user task is cancelled e.g. deleted. To attach another task to a boundary timer event you just use a sequence flow. In this case we defined another user task that reuses the same user task form and has the management group configured as its candidate group. <timerEventDefinition> <timeDuration>PT1M</timeDuration> </timerEventDefinition> </boundaryEvent> <sequenceFlow id="flow6" sourceRef="escalationTimer" targetRef="bakePizzasByManager" /> <userTask id="bakePizzasByManager" name="Fozzie is sleeping?" activiti:formKey="pizzaBake.form" activiti: One more part to explain and that is the twitter service task. Just like the calculate price service task we have implemented a twitter task with a Java service task. We already know how to implement a Java service task in BPMN 2.0 XML, so we’ll just take a look at the Java class implementation. public class TwitterService implements JavaDelegation {This service implementation uses the Twitter4J framework to communicate with your Twitter account. It takes a couple of process variables and twitters two messages on to your account about the pizza order. Notice that you still have to overwrite the access token and access token secret keys. In the Twitter4J examples this OAuth authentication method is explained. @Override public void execute(DelegateExecution execution) throws Exception { String customerName = (String) execution.getVariable("customerName"); int amount = (Integer) execution.getVariable("amount"); String pizza = (String) execution.getVariable("pizza"); int price = (Integer) execution.getVariable("price"); String delivery = (String) execution.getVariable("delivery"); AccessToken accessToken = new AccessToken("YOUR TOKEN", "YOUR TOKEN SECRET"); Twitter twitter = new TwitterFactory().getOAuthAuthorizedInstance( "RclhVMQOSyapUD4sFtyTfg", "t7fI6QjDDhb13dxiOG2MIJY5KGBJKZfxkBEl2bAOc8", accessToken); twitter.updateStatus("#Activiti engine created pizza order: " + customerName + " ordered " + amount + " " + pizza + " pizza's for DZone article"); twitter.updateStatus("Oh yes the " + pizza + " pizza's will cost: " + price + " euro and will be delivered around " + delivery); } } To make it easy for you to generate these tokens the source code of this article contains a CreateTwitterAccess unit test. This unit test generates a Twitter URL you must open in your web browser. After you’ve received a PIN code, you have to enter this PIN code in the console of the unit test. Then the unit test will generate your access tokens, which you can copy to the TwitterService implementation. We’ve covered the whole pizza order process and talked about script, Java service, and user tasks, the exclusive gateway, the boundary timer and the user task forms. So now it’s time to deploy and demo our example process. To have everything running you must execute the following steps: - Download Activiti RC1 from. - Run the installation script by running the ant command in the setup directory of the Activiti distribution. This will install and startup Activiti. - Stop the Tomcat server that’s started as part of the installation script. - Run the default target deploy.pizzaorder of the build.xml that’s available in the root of the source code package of this article. This will deploy the process definition to the Activiti engine. - Copy the groovy-all jar from the examples/activity-engine-examples/libs-runtime directory of the Activiti distribution to the app/apache-tomcat-6.0.29/lib directory. This is needed because the Groovy scripting engine is not installed in Tomcat by default. - Copy the twitter4j-core jar from your Maven repository (by default USER_HOME/.m2) to the Tomcat lib directory. The twitter jar is available in the Maven repo in the org/twitter4j/twitter4j-core/2.1.6 directory. This makes sure that the Twitter4J library is available on the classpath. - Copy the pizzaorder.jar from the source code dist directory to the app/apache-tomcat-6.0.29/lib directory. This is needed because the service task classes used in the pizza order process must be on the classpath of Tomcat. - Start the Tomcat server again by running the Ant command ant tomcat.start from the setup directory of your Activiti distribution. When you are done with these steps you can go to the Activiti Explorer – - and login with for example fozzie, fozzie. Because we also need the fozzie user for our first user task this is handy later on. Then click on the processes tab and click on the start form link you can find with the pizza order process. There you can fill-in the start form and click for OK. Then go back to the task tab and you should see a screen similar to the next figure. This means that the pizza order process has successfully created a user task for fozzie. If you look in the Tomcat console you’ll see the logging lines we added to the tasks. Now you can choose if you want fozzie to complete the task or you can wait a minute and the task must be completed by kermit. When the task is completed, the 2 messages are tweeted about the pizza order process that look like this. Activiti, a full-blown BPMS Now that you have seen Activiti Engine in action, just a few words about the tools surrounding the engine that make Activiti a full-blown BPMS. The business process life cycle starts with Activiti Modeler, the Web 2.0 based modeling tool which allows you to drag and drop BPMN 2.0 constructs on a modeling pane to create your process model in your favorite browser. Next tool down the line is Activiti designer, an Eclipse based development tool to add technical detail to the model, like for example binding a Java class to a service activity. It provides unit test functionality to run the BPMN 2.0 process you are working on, on the Activiti engine as well as support to export business archive files to run on the engine when you are all done. When processes finally run on there you need Activiti Probe. Probe provides administrating capabilities by providing a view on the process engine environment at any given moment, showing the status of the engine from a technical standpoint. For the end users of the business process application there is Activiti Explorer as you have seen in the pizza order example. With Explorer workflow is implemented. Users get to see the list of tasks they can work on and processes they are allowed to start. The last tool in the Activiti stack is Cycle. Activiti Cycle facilitates the collaboration between business folks and IT in a way that fits with how people work in practice. It is basically a tool that focuses on the big picture. This means that all aspects of business related content, software development and software deployment are taken into account. With Cycle, Activiti created a glue "layer" between the different tools and repositories that are used in business process modeling projects as you can see in the tool overview picture. All these tools of Activiti deserve an article for themselves to demonstrate the functionality they provide to support the business process lifecycle. As mentioned before, we will publish a few of these articles in the coming weeks/months, so check them out if Activiti impresses you as much as it impresses us and until that time order some pizzas! About the Authors Tijs Rademakers Tijs works as a software architect for Atos Origin and is the lead developer of the Activiti Designer project. His focus is on BPM and open source Java development. Tijs is the author of the Open Source ESBs in Action book and is a frequent speaker on Java conferences Twitter Ron van Liempd As a software architect at Atos Origin Ron focuses on translating the business needs into IT realizations. His focus is on business rules management systems and BPM and works on the Activiti Designer project. Twitter Tijs and Ron are currently writing the Manning book 'Activiti in Action' and blog about it on. References • Find out more about BPMN 2.0 at • Detailed information about working with Activiti: • On Activiti Designer: • Good intro on Activiti Cycle: • Cool demo of Activiti Explorer on the iPad: And Activiti Modeler on the iPad as well: Claude Lalyre replied on Mon, 2010/11/22 - 5:06am Tijs Rademakers replied on Tue, 2010/11/23 - 9:25am Fadi Al-katout replied on Wed, 2011/10/26 - 2:43pm Great article, but I have a question about the users and groups in activity, is it possible to make activity work with my own users and groups that I already have in my application? Thanks in advance
http://java.dzone.com/articles/tutorial-devoxx-activiti-bpmn
CC-MAIN-2014-41
refinedweb
3,857
54.42
Introduction to Spring MVC for Web development Introduction Spring MVC is a framework used for the development of web applications using java ee technology. It comes under the spring project umbrella which, is a bunch of projects built around spring architecture principles. Spring follows the best coding practices and design patterns, which allow them to make their framework easy to use and learn. Spring contains a bunch of projects like Spring Security, Spring Boot, Spring Microservices, etc. Each project focuses on a different aspect of application development. Spring MVC is one such project which focuses on the web development aspect of an application. So what it is, we’ll see it in the following sections. But before that, there are some prerequisites you should know to learn spring. Which is Java off course, which I assume you know already. And other is servlet and JSP. If you wish, you can go to the official website of JSP and Servlet, of Java ee to learn about Servlet and JSP technology. Or you can have a look at my previous blog, which gives a brief about those technologies. What is MVC? MVC in Spring MVC stands for Model View Control. It is a design pattern Spring MVC follows and forces it on web developers to built web applications. Now we’ll see one by one what this Model, View, and Controller is. - Model : Model is simply data or objects that are passed to the Controller and view. In order them to communicate with each other. These Models can be bean classes, a simple java POJO’s, or data coming from a database doesn’t matter. Model is just a representation of data that can be understood by both the view and controller. - View : The view is a result that the user sees after the completion of his request. The view is also an interface user interacts with to use the web application. Generally, this view is web pages that give the user an interactive way to communicate with the application. - Controller : The controller is an section where we write our core business logic. The controller gets the data from the View, processes it, and generates the model. Then passes that model back to the View to give a response back to the user. Spring MVC Workflow - The Spring MVC is created on top of a Simple Servlet and JSP technology. The Front Controller or Dispatcher servlet is an entry point to any spring MVC application. The front controller is a Java class who extends The HttpServlet class internally and acts as a doorway for the request coming in. and forwards those requests to the appropriate internal controller. - After The request is forwarded to the controller, it performs internal business logic on that request and generates the appropriate response to send back to the Front controller. This response can be a java bean or a Collection of beans. This response is encapsulated inside of the model to send it. Here the model is just a container for data. - After this model containing data is forwarded to the view resolver, which forwards this model to the appropriate view template so that it can generate an appropriate web page to send as a response to a client using the data generated by a controller. Why use Spring MVC over other frameworks? - The Spring MVC framework is very flexible and easy to use. Which can be extended easily for enterprise-level applications - The Spring MVC framework belongs to the spring family of projects. Hence it works very well with other spring technologies and projects. - In the Spring MVC reusable business logic can be generated. Hence no code duplication is present. - Spring is annotation based. Hence it is easy to configure your existing code base by adding an appropriate annotation to them. There are lots of other features spring MVC provides. You can check them out on their official site. Links are in the reference section. Annotations In Spring MVC In this section, we’ll talk about some of the most commonly used annotations for Spring MVC. Since this blog is just a tip of an iceberg. Please go to spring.io for more detailed documentation. There are plenty of other annotations, which are useful and require a lot of space in a blog. Hence we’ll be sticking with a few of them. Since they are most frequently used in web development. @Controller : If you annotate a class with the @controller annotation, it indicates that the annotated class is a Controller, the controller is an extension of @component annotation from spring core. This annotation helps in the component scanning of spring. Component scanning means that if you annotate, the class with component annotation spring will register that component in its registry so that you can ask spring to give that to you whenever you need it. package com.example; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; @Controller public class ExampleController { // request mappings } @RequestMapping : This annotation is used on methods to map those methods to appropriate web requests. These methods are written inside of a controller class. Request mapping accepts lots of parameters. You can refer to this link to know about those parameters. package com.example; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; @Controller public class ExampleController { @RequestMapping public String helloWorld() { return "hello.jsp"; } } There are plenty of other annotations in Spring MVC. You can see their use cases and know more about them on spring’s official website here. Conclusion Spring MVC is a great tool for you if you want to build Enterprise-level Java applications. It is robust, versatile, and easy to use. Spring is a widely known name nowadays every, java developer knows to some extent about it. Using spring MVC helps you connect easily with other cool projects from springs. So that you can harness the power of spring. This blog is just an introduction to this big giant technology spring MVC. There is always more to know and learn about it. Since Spring MVC gets updated quickly, check the spring.io for more details and their other projects. And thank you for reading. References :
https://akashmjain.medium.com/introduction-to-spring-mvc-for-web-development-25237a7bbec0
CC-MAIN-2021-21
refinedweb
1,034
66.74
Serial data is passed byte by byte from one device to another. It’s up to you to decide how each device (computer or microcontroller) should interpret those bytes, when the beginning of a message is, when the end is, and what to do with the bytes in between. Before you can pass messages, however, the two devices need to agree on the rate at which they’ll exchange data, the number of bits per byte, etc. Generally, 8 bits, no parity, one stop bit is a good standard, and somewhere between 2400 and 9600 baud is a decent rate for small amounts of data. Computers use numbers to represent alphanumeric characters (letters and numbers and punctuation) in bytes. There is a standard code, called the ASCII code (it stands for American Standard Code for Information Interchange), that assigns each number or letter a specific byte value from 0 to 255. For example, capital A is ASCII value 65. This chart can be found in many computer manuals’ indexes, and all over the place online. Here’s one online version. ASCII is a very common code (though not the only one), and is used by many devices as part of their serial protocol. As you can tell by the name, ASCII is very much biased to English alphanumeric communication. It’s convenient, in that any ASCII symbol can be represented in one byte, but it’s also limited. Other alphanumeric systems with more characters than the limited number in the Latin (English) are not represented in ASCII. ASCII has been superseded by Unicode, but fortunately the most common form of Unicode, UTF-8, is compatible with ASCII. For more on Unicode, see this informative blog post by Umer Mansoor. If you’re only sending one number, and that number is less than 255, you know it can fit in a byte. This kind of message is easy. Just send the same byte over and over, and the computer can pick it up at any time. If you’re sending more than that (and you usually are), things are a little more complicated. The receiving computer has to know when the message starts and when it ends. Different serial devices will use different codes to perform different actions. If a device, like a tape deck, laserdisk player, etc. is serial controllable, there will usually be a section in its manual outlining the messages it expects to receive, and at what baud rates it expects to receive them. In Arduino, if you want to convert a byte’s raw value to the ASCII characters representing it, use the DEC modifier, like so: Serial.print(myVar, DEC); If the value of myVar was 134, this line would return three bytes, containing the characters “1”, “3”, and “4”. Once you’ve got the microcontroller programmed and hooked to the computer, the first thing you should do is test to see what the microcontroller is sending using the simplest program possible. On the mac, ZTerm does the job well, as does the screen command in the Terminal program; on the PC, HyperTerminal works well. Set the baudrate and settings to the same as what you’ve programmed the microcontroller set to, and connect. If there’s an option for flow control, choose ‘none’. You should see the ASCII representations of whatever bytes you’re sending. The biggest challenge to serial communication is making sure you get all the bytes, and that you get them in the right order. If the computer (PC or microcontroller) was doing something else at the instant that it should have been listening to the serial port, or it its input buffer has overflowed (i.e. if you’ve received more data than the computer has processed), you may lose some data. There’s no single way of making sure you get the right data, but there are some fundamentals you can check: 1. How many bytes am I sending? Am I receiving the same number of bytes? For example, if the microcontroller is sending 3 variables, for 3 sensors, the PC needs to receive all three in order to act on them. 2. Did I get the bytes in the right order? If the sender were sending “ABC” over and over, as “ABCABCABCABC” etc, it’s possible that the receiver might not start receiving at the beginning, and get “BCABCABCABCABCA” instead. This is a problem if, say, A is the right switch, B is the center switch, and C is the left switch. In order to avoid this, it’s sometimes useful to send some value at the start or the end of your data string that’s a constant number, and different from the other values. For example, if A can range from 0 to 100, and B can range from 0 to 100, and C can range from 0 to 100, perhaps you send 101 at the beginning of each string. In BASIC code, that might look like this: PICBasic Pro: A var byte B var Byte C var Byte headerByte var byte headerbyte = 101 main: ' generate values for A, B, and C here serout2 portc.6, 16468, [headerByte, A, B, C] goto main BX-24: Dim A as byte Dim B as Byte Dim C as Byte dim headerByte as byte headerbyte = 101 ' fill in the code to set up your serial port here do ' generate values for A, B, and C here call putQueue(OutputBuffer, headerByte, 1) call putQueue(OutputBuffer, A, 1) call putQueue(OutputBuffer, B, 1) call putQueue(OutputBuffer, C, 1) loop Wiring/Arduino: char A; char B; char C; char headerByte = 101; void setup() { Serial.begin(9600); } void loop() { // generate values for A, B, and C here Serial.print(A, BYTE); Serial.print(B, BYTE); Serial.print(C, BYTE); Serial.print(headerByte, BYTE); } Another alternative to this method of “punctuating” your serial message is to set up a “call and response” method. For example, the sending device may wait until it receives a request for data, then send one string of data, then wait for another request. This way, the receiver knows that it will only ever have one string of data in its buffer at a time. In this case, you need to make sure that the receiver’s serial buffer can fit as many bytes as the sender sends out in one string, and that the sender can receive a byte and wait before sending. If the microcontroller is the sender, you might make it wait like this. This example assumes the receiver will send the letter “A” (ASCII 65) when ready for new data: PicBasic Pro: main: ' wait for incoming data: serin2 portc.7, 16468, [inputVar] ' If we get a byte from the PC and it's 65 (ASCII "A"), ' send our data out: if inputVar = 65 then serout2 portc.6, 16468, [A, B, C] endif goto main BX-24: ' Find out if anything is in the queue. gotaByte = statusQueue(inputBuffer) ' If there is data in the input buffer, ' get the first byte of it: if (gotaByte = true) then call getQueue(inputBuffer, inByte, 1) if inByte = 65 then ' send bytes out here: call putQueue(OutputBuffer, A, 1) call putQueue(OutputBuffer, B, 1) call putQueue(OutputBuffer, C, 1) end if else inByte = 0 end If Arduino: // Find out if anything is in the queue. if (Serial.available() > 0) { // If there is data in the input buffer, // get the first byte of it: char inByte = Serial.read(); if (inByte == 'A') { // send bytes out here: Serial.print(A, BYTE); Serial.print(B, BYTE); Serial.print(C, BYTE); } } 3. Are my bytes part of a larger variable? Let’s say you’re sending an integer variable from the BX-24. An integer is two bytes long, so the PC will receive two bytes when you send it an integer. To convert those two bytes to a single number again, use this formula: integerVar = (byte1Var * 256) + byte2Var If this confuses you, think of it as analogous to how you convert a number in hexadecimal (base 16) to a decimal (base 10) value. In this case, each byte is a digit in base 256. Try it by sending a constant number larger than 256 first, to make sure you have the formula right. 4. Is my data getting garbled in transit? Serial communication is tricky, in that you need to have both sides properly grounded to a common ground, and both receive and transmit wires properly connected and insulated. all kinds of electrical noise can get in the system and interfere with your transmission. To minimize the time you spend troubleshooting, it’s best to start out by sending a constant message. For example, if you know you’re planning to send out three bytes of data to represent sensor data as above, don’t start out by sending the actual values from the sensors. Start out by sending constant values. For example, PicBasic Pro: outByte var byte outByte = 65 serout2 portc.6, 16468, [outByte] outByte = 66 serout2 portc.6, 16468, [outByte] outByte = 67 serout2 portc.6, 16468, [outByte] BX-24: dim outByte as byte outByte = 65 call putQueue(OutputBuffer, outByte, 1) outByte = 66 call putQueue(OutputBuffer, outByte, 1) outByte = 67 call putQueue(OutputBuffer, outByte, 1) Arduino: Serial.print(65, BYTE); Serial.print(66, BYTE); Serial.print(67, BYTE); When you see that the receiver is getting the same values you’re sending consistently, then you can change your code so that you care sending the actual values of the sensors. the hardware setup alone introduces enough variable conditions and points of possible failure to begin with. Because of this, it’s best to make your code (the thing you can control) constant until you have the hardware under control.
https://www.tigoe.com/pcomp/code/communication/interpreting-serial-data-bytes/
CC-MAIN-2019-22
refinedweb
1,623
69.52
David Turner has modified FreeType to be able to render sub-pixel decimated glyphs using different methods of filtering. Fontconfig needs new configurables to support selecting these new filtering options. A patch follows that would correspond to one available for Cairo in bug 10301. Created attachment 12998 [details] [review] Patch against fontconfig-2.5.0 adding options. This patch adds a new option with 4 configuration constants. It is modified from one available at Ubuntu developers originated this patch, albeit with the error of creating name constants that conflicted with existing ones. This patch modifies the original by prefixing the constant strings with "lcdfilter". the enumerations use lcdfilter and the define lcd_filter is that normal? Created attachment 13254 [details] [review] Patch with correct define. (In reply to comment #2) > the enumerations use lcdfilter and the define lcd_filter is that normal? > Nope, that was wrong. I've changed the patch. I discussed this issue with Keith the other day: 17:48 < syp|> keithp: what do you think about ? 17:49 < keithp> syp|: fontconfig doesn't need these values, but cairo can add them itself if desired 17:50 < keithp> fontconfig allows applications to add their own variables to fontconfig 17:51 < keithp> you can do precisely the same operation from cairo code 17:51 < keithp> just add the values and constants using the existing public fontconfig API 17:51 < keithp> then fontconfig will happily parse those values appropriately 17:52 < syp|> keithp: is this way of doing already used for other configuration options? 17:53 < keithp> syp|: not in cairo, but I think Xft may use this 17:53 < keithp> yeah, FcNameRegisterConstants, FcNameRegisterObjectTypes 17:55 < syp|> ok thanks for the information. so this particular bug should be WONTFIXed? 17:57 < keithp> please see if you can make it work from cairo, then we can close the bug So I tried to use the FcNameRegisterConstants and FcNameRegisterObjectTypes functions to register the lcdfilter object type and the lcdfilter* constants. However, I could see a time dependency issue when doing this: FcNameRegisterConstants() has to be called before the fontconfig configuration file is read, otherwise the parser is not aware of the defined constants and do not fill the pattern with them. This can work when using the toy API and registering the constants from _cairo_ft_scaled_font_create_toy(), because the fontconfig configuration file is parsed later when FcDefaultSubstitute() is called (so it knows about the constants at this time). But it fails when using Pango, because it deals with FontConfig before calling into the cairo API, so that the fontconfig configuration is already parsed when we are in cairo-ft-font.c I agree that this probably isn't the best place for these options. This bug is here because the fontconfig patch was part of the original Ubuntu patchset, and I was unaware it was possible to register configuration objects from outside fontconfig itself. Sylvain, the fontconfig configuration never really worked with the Ubuntu patchset. In fact, the cairo patch currently only uses the fontconfig options for name-to-constant mapping. This said, I'd say there are two courses of action we can take from this standpoint:. With either path, I really agree that, given the availability of that config API, this patch would be unnecessary, and this bug can probably be closed. (In reply to comment #5) > (In reply to comment #6) > > While I personally agree that the "Default" filter is adequate, we still need to have at least the "Legacy" filter available. The loss of this more-known, thinner filter would probably cause outrage among those who consider the new filters "too blurry." In that regard, I think that having "Default" as the default, but having only Xrdb configuration to set the other filters would be sufficient. As been already pointed out, registering in a library is not possible. However if we are going to do it in cairo anyway, in a cairo namespace, then we can forget about registering constants and types and just work with strings. That's whay pango does for pangogravity element for example. I still think this thing belongs into fontconfig because FreeType exposes this configuration. It's not a cairo-only thing. well, given that we can't make this work in any library, it seems like the whole plan of building application-specific fontconfig constants is a loss. Sigh. I'll plan on adding this stuff for 2.6, please put together a patch that includes the necessary documentation additions and I'll apply it. Created attachment 13713 [details] [review] lcd filter type objects/constants and documentation update As discussed on irc, I renamed the constants from lcdfilter* / FC_LCD_FILTER_* to lcd* / FC_LCD_*. Applied in 53aec111074cf7b46d15eb84a55791d3c95bc15e bug/show.html.tmpl processed on Feb 19, 2017 at 20:58:44. (provided by the Example extension).
https://bugs.freedesktop.org/show_bug.cgi?id=13566
CC-MAIN-2017-09
refinedweb
797
53.21
14 October 2009 18:08 [Source: ICIS news] By Nigel Davis ?xml:namespace> Earnings before interest and tax (EBIT) could rise by between 20% and 40% for many, it suggests, due to strong volume growth and the momentum behind cost cutting. Sales volumes could lift by between 8% and 10%, “even with only modest GDP growth in 2010 and no strong inventory restock”. Sounds good, at least from where we are. But, relatively speaking, the climb back from the trough of the downturn cannot but take time. BASF did well in the third quarter to hold the drop in EBIT, before charges and gains, to 20%. The sales fall year on year was 19%. Quarter to quarter in 2009, EBIT was up 9.5% and sales up 2.4%. That better-than-consensus performance helped drive the shares higher. BASF CEO Jurgen Hambrecht was cautious in his comment on the outlook, though, despite the profits gain. “Although the mood is slightly brighter and we are climbing out of the trough, the recovery remains slow and fragile,” he said in a statement. “Economic developments in the coming months remain difficult to predict,” the company said. The chemicals business through most of this year has been driven by demand from The industry is well placed to take advantage of what appears to be simultaneous manufacturing upturn in both the Deutsche Bank’s chemicals analyst said he expects third-quarter sector EBIT to be a considerable 20-30% higher than in the second quarter. Volumes and cost cutting will offset eroding foreign exchange support, increasing raw material costs and marginally eroding product prices, he adds. “On outlook we expect less pessimism but no optimism given the conservative nature of most management teams and lack of clear evidence of sustained recovery.” So don’t expect too much in the way of a positive outlook from the companies themselves – that appears to be the message – but note the percentage gains against a weakening sequence of quarters from the end of last year and through 2009. Clearly, the annual comparisons will start to look better from the fourth quarter of 2009 onward given the parlous state of business at the end of 2008. However, it looks as though commodities, with the exception of fertilizers, will come back strongest, driven by The bank also expects an improvement in specialities but notes the greater sequential quarter-to-quarter impact of product price erosion and raw material price pressure. In their home markets, European and US companies still struggle with a changed operating environment characterised by low inventories and reduced operating rates. Products rapidly can become tight if there are supply problems. In the most recent quarter, prices have moved against the background of supply constraints rather than with demand. Chemicals production in the EU was down 17.7% for the eight months to July compared with the similar period of 2008. The improvement, however, has accelerated since February, with the July 2009 shortfall year on year being 11.3%, according to EU statistics. The chemicals trade group Cefic calculates that petrochemicals production dropped lower again in July after gaining ground through most of the year. July production was down 13.8%. EU polymers production in July was down 15.9% year on year, basic inorganics down 20% and specialty chemicals down 3.1%. It depends very much on the portfolio and the exposure to In BASF released its stronger-than-expected third-quarter results early as it became clear that they were well ahead of consensus, and the move has raised expectations for the sector. The company will issue a full quarterly earnings statement on 29 October. Markets have stabilised at a low level, BASF said of its own businesses. “Impulses for growth are coming from Asia, especially
http://www.icis.com/Articles/2009/10/14/9255309/insight-basfs-better-than-expected-q3-raises-hopes-for-sector.html
CC-MAIN-2014-52
refinedweb
630
61.77
What's the difference and which are you supposed to use? I typically stick to else if unless I am using short tags. Is this correct?? What's the difference and which are you supposed to use? I typically stick to else if unless I am using short tags. Is this correct?? They both have the exact same behavior and either can be used unless you not using curly brackets, then you should be using just elseif There is this though:. From php manual. Similar question exist in . Though i am nt sure about the answer there as one of them suggest else if as else { if(){}} I always assumed they were the same, thanks for the heads up on the colon format though (+1 guys) - not that I'd ever use such an ugly form :) I use the colon format for all my PHP templates!! Curly braces in templates just doesn't make sense to me. Here's an example of my PHP templates: <div id="foo"> <h1>Heading</h1> <?php if ($bar == 5): ?> <div class="bar">Five!</div> <?php else: ?> <div class="baz"><?= $bar ?></div> <?php endif; ?> <div>My name is <?= $name ?></div> </div> Like I said ugly. :) I don't think any php conditional form in a template can be made to look nice. The closest I get to the colon form is twig's templater (my fave templater at the mo): {% if kenny.sick %} Kenny is sick. {% elseif kenny.dead %} You killed Kenny! You bastard!!! {% else %} Kenny looks okay --- so far {% endif %} I also see that you're using short tags (<?=) - makes sense for 5.4+ - I will be adopting this too. Had a few icky moments a couple of years back when a cheap (read: useless) host would not allow short tags and I had to do a full text replace. Sorry, but I'm not seeing why you think your version looks all that more impressive and clean than Now, now Dani, play nice. I never said I thought that 'my' version (re: twig) looks 'more impressive and clean'. I think I said that they were all ugly. Anyway, it's a personal thing, I just don't like the colon syntax - it's ugly to me as is seems unfinished somehow. It may be down to my big-waisted trousers, but I like braces. :) When it comes to coding style it is down to personal preference. I too was once burned by the php short codes not being supported on an OLDER server and since have NEVER recovered! I've seen the {% if kenny.sick %} Kenny is sick. {% elseif kenny.dead %} syntax before and thought to myself WTF is that because I'd never seen it before in PHP. I have a similar aversion to ternary operators etc. Perversely, I love terneries. Similar question exist in . Though i am nt sure about the answer there as one of them suggest else if as else { if(){}} Sorry to have to say this, but we all know about ‘stack overflow’; thank you! We are not lazy nor ignorant… we're just members of DaniWeb, and that's why we are hanging here @ while reading your post, not there! more over PHP is programming paradigm neutral. You can write object oriented PHP or functional PHP or procedural PHP or any mix of them together, and of course you can use things like GOTO and sections , trails or this alternative syntax (IF ELSEIF ENDIF). I will not take any further position on this, just to state that to decide things like that (in any language) need a bit caution because the programming culture that you will build will follow you. I was just reading up somewhere that elseif is slightly faster than else if???? I don't have experience writing programming languages, but from a logical standpoint this sounds like it could make sense. Since white space separates tokens, elseif is just one token the language needs to process, and should just execute the behavior one would expect of it. This is as opposed to else if being two separate tokens, each with its own behavior. I can't imagine the difference being noticeable in a practical sense, though. Did the piece you read mention how many else ifs you would have to run through before you noticed the change in speed? But I'm with diafol on this one -- I started programming with C++ and have always been loyal to using {}'s when possible. And I love using the ternary operator, for whatever dumb reason ^.^
https://www.daniweb.com/programming/web-development/threads/457349/else-if-or-elseif
CC-MAIN-2018-47
refinedweb
759
81.83
You're used to structured data types more complex than Perl's arrays and hashes, such as C's structs and Pascal's records. You've heard that Perl's classes are comparable, but you aren't an object-oriented programmer. Use the standard Class::Struct module's struct to declare data structures reminiscent of those in the C programming language: use Class::Struct; # load struct-building module struct Person => { # create a definition for a "Person" name => '$', # name field is a scalar age => '$', # age field is also a scalar peers => '@', # but peers field is an array (reference) }; my $p = Person->new( ); # allocate an empty Person struct $p->name("Jason Smythe"); # set its name field $p->age(13); # set its age field $p->peers( ["Wilbur", "Ralph", "Fred" ] ); # set its peers field # or this way: @{$p->peers} = ("Wilbur", "Ralph", "Fred"); # fetch various values, including the zeroth friend printf "At age %d, %s's first friend is %s.\n", $p->age, $p->name, $p->peers(0); The Class::Struct::struct function builds struct-like classes on the fly. It creates a class of the name given in the first argument, complete with a constructor named new and per-field accessor methods. In the structure layout definition, the keys are the names of the fields and the values are the data type. This type can be one of the three base types: '$' for scalars, '@' for arrays, and '%' for hashes. Each accessor method can be invoked without arguments to fetch the current value, or with an argument to set the value. For a field whose type is an array or hash, a zero-argument method invocation returns a reference to the entire array or hash, a one-argument invocation retrieves the value at that subscript,[1] and a two-argument invocation sets the value at that subscript. [1] Unless it's a reference, in which case it uses that as the new aggregate, with type checking. The type can even be the name of another named structure—or any class, for that matter. Because a class constructor doesn't have to be named new, if a component of your class is another object class, you'll have to invoke that named constructor yourself. use Class::Struct; struct Person => {name => '$', age => '$'}; struct Family => {head => 'Person', address => '$', members => '@'}; $folks = Family->new( ); $folks->head($dad = Person->new); $dad->name("John"); $dad->age(34); printf("%s's age is %d\n", $folks->head->name, $folks->head->age); You can pass the constructors created by Class::Struct initializer pairs: $dad = Person->new(name => "John", age => 34); $folks->head($dad); Internally, the class is implemented using a hash, just as most classes are. This makes your code easy to debug and manipulate. Consider the effect of printing out a structure in the debugger, for example. If you use the Perl debugger's x command to dump out the $folks object you've just created, you'll notice something interesting: DB<2> x $folks 0 Family=HASH(0xcc360) 'Family::address' => undef 'Family::head' => Person=HASH(0x3307e4) 'Person::age' => 34 'Person::name' => 'John' 'Family::members' => ARRAY(0xcc078) empty array Each hash key contains more than the just the name of the method: that name is prefixed by the package name and a double-colon. This convention guards against two classes in the same inheritance hierarchy using the same slot in the object hash for different purposes. This is a wise practice to follow for your own classes, too. Always use the package name as part of the hash key, and you won't have to worry about conflicting uses in subclasses. If you'd like to impose more parameter checking on the fields' values, supply your own version for the accessor method to override the default version. Let's say you wanted to make sure the age value contains only digits, and that it falls within reasonably human age requirements. Here's how that function might be coded: sub Person::age { use Carp; my ($self, $age) = @_; if (@_ > 2) { confess "too many arguments" } elsif (@_ = = 1) { return $self->{"Person::age"} } elsif (@_ = = 2) { carp "age `$age' isn't numeric" if $age !~ /^\d+/; carp "age `$age' is unreasonable" if $age > 150; $self->{'Person::age'} = $age; } } Using the principles outlined in Recipe 12.15, you can provide warnings only when warnings have been requested using warnings::enabled. Once your module has registered its package as a warnings class with use warnings::register, you can write: if (warnings::enabled("Person") || warnings::enabled("numeric")) { carp "age `$age' isn't numeric" if $age !~ /^\d+/; carp "age `$age' is unreasonable" if $age > 150; } You could even complain when warnings are in force, but raise an exception if the user hadn't asked for warnings. (Don't be confused by the pointer arrow; it's an indirect function call, not a method invocation.) my $gripe = warnings::enabled("Person") ? \&carp : \&croak; $gripe->("age `$age' isn't numeric") if $age !~ /^\d+/; $gripe->("age `$age' is unreasonable") if $age > 150; The Class::Struct module also supports an array representation. Just specify the fields within square brackets instead of curly ones: struct Family => [head => 'Person', address => '$', members => '@']; Empirical evidence suggests that selecting the array representation instead of a hash trims between 10% and 50% off the memory consumption of your objects, and up to 33% of the access time. The cost is less informative debugging information and more mental overhead when writing override functions, such as Person::age shown earlier. Choosing an array representation for the object would make it difficult to use inheritance. That's not an issue here, because C-style structures employ the much more easily understood notion of aggregation instead. The use fields pragma provides the speed and space of arrays with the expressiveness of hashes, and adds compile-time checking of an object's field names. If all fields are the same type, rather than writing it out this way: struct Card => { name => '$', color => '$', cost => '$', type => '$', release => '$', text => '$', }; you could use a map to shorten it: struct Card => { map { $_ => '$' } qw(name color cost type release text) }; Or, if you're a C programmer who prefers to precede the field name with its type, rather than vice versa, just reverse the order: struct hostent => { reverse qw{ $ name @ aliases $ addrtype $ length @ addr_list }}; You can even make aliases, in the (dubious) spirit of #define, that allow the same field to be accessed under multiple aliases. In C, you can say: #define h_type h_addrtype #define h_addr h_addr_list[0] In Perl, you might try this: # make (hostent object)->type( ) same as (hostent object)->addrtype( ) *hostent::type = \&hostent::addrtype; # make (hostenv object)->addr( ) same as (hostenv object)->addr_list(0) sub hostent::addr { shift->addr_list(0,@_) } As you see, you can add methods to a class—or functions to a package—simply by declaring a subroutine in the right namespace. You don't have to be in the file defining the class, subclass it, or do anything fancy and complicated. It might be better to subclass it, however: package Extra::hostent; use Net::hostent; @ISA = qw(hostent); sub addr { shift->addr_list(0,@_) } 1; That one's already available in the standard Net::hostent class, so you needn't bother. Check out that module's source code as a form of inspirational reading. We can't be held responsible for what it inspires you to do, though. perltoot(1), perlboot(1), perlobj(1), and perlbot(1); the documentation for the standard Class::Struct module; the source code for the standard Net::hostent module; the documentation for the use fields pragma; the documentation for the Alias module from CPAN; Recipe 13.3
http://www.yaldex.com/perl-tutorial/0596003137_perlckbk2-chp-13-sect-5.html
CC-MAIN-2018-05
refinedweb
1,268
52.63
Once inputs and outputs have been identified and quantities designated in the physical flow table and unit values have been estimated for inputs and outputs (or at least for those for which values can be estimated), it is possible to begin conducting the analysis by comparing costs and benefits in various ways to answer the questions asked by decisionmakers. This involves several steps constructing value flow tables (section 6.2); discounting benefits and costs (section 6.3); and computing financial and economic measures of project worth (section 6.4). The first step in the analysis is to combine the information from the physical flow and unit value tables into value flow tables such as described in chapter 3 (see tables 3.6 and 3.12). If a financial analysis is being conducted, this value flow table will be referred to as a cash flow table (see table 3.6), as only financial values and market prices have been considered in the analysis. If an economic efficiency analysis is being conducted, this value table will be referred to as a value flow table (see table 3.12), as economic values of costs and benefits have been considered. It is instructive to look at the main differences between the total value flow table for the economic analysis and the cash flow table used in the financial analysis (as described in chapter 2). Three types of adjustments need to be made in constructing an economic value flow table from a financial cash flow table. These adjustments involve 1. adding some costs and benefits that are not included in the cash flow table; 2. revaluing some costs and benefits in the cash flow table, using shadow prices instead of market prices; and 3. removing transfer payments from the cash flow table and adjusting for differences in timing of economic and financial costs and economic benefits and financial returns. The first two of these adjustments have already been discussed (the first in chapter 4 and the second in chapter 5). The third adjustment - the treatment of timing problems and transfer payments which show up in the cash flow table - is discussed below. The following topics are only of concern when the total value flow table is derived directly from the cash flow table. If the total value flow table is derived from the physical input and output tables and the unit value tables, then financial transactions that involve the transfers of money, such as taxes and subsidies that are important in financial cash flow tables, will not show up. The main types of transfer payments are taxes, subsidies, loan receipts, and repayment of loans and interest. Total value flow tables should be adjusted so that taxes and loan costs are not treated as costs and subtracted from benefits, and subsidies and loan receipts are not added to benefits or netted out of costs. In the case of loans, Squire and van der Tak (1975) explain the adjustments needed as follows: ... the payment of interest by the project entity on a domestic loan merely transfers purchasing power from the project entity to the lender. The purchasing power of the interest payment does reflect control over resources, but its transfer does not use up real resources and to that extent is not an economic cost. Similarly, the loan itself and its repayment are financial transfers. The investment, however, or other expenditure that the loan finances involves real economic costs. The financial cost of the loan occurs when the loan is repaid, but the economic cost occurs when the loan is spent. The economic analysis does not, in general, need to concern itself with the financing of the investment: that is, with the sources of the funds and how they are repaid. Similar arguments hold for taxes and subsidies, although one additional point needs clarification to avoid a common confusion. Chapter 5 argued that tariffs (taxes) and subsidies should be considered in deriving measures of local w.t.p., i.e., their effect on local prices should not be removed if they are expected to persist during the period of the project. Why is it now argued that taxes levied on the project and subsidies provided to the project should be removed (or not be considered) in the economic analysis? The answer is that two different considerations are being dealt with. In the case of derivation of values to use for inputs into the project and outputs from the project, the interest is in measures which reflect local w.t.p. for these items in the existing markets. The effect on w.t.p. of transfer payments is relevant, given the definition of economic value used here. On the other hand, in deriving the appropriate economic efficiency measure of project worth, the concern is with real resource flows and real flows of consumer goods or services coming from the project, valued in terms of the opportunity cost and w.t.p. value measures discussed earlier. A tax on the project output value merely means that some of the control over the benefits due to the project are transferred from the project entity to the public sector (government). The real benefits (the increases in consumer goods and services due to the project) do not change because a financial entity pays a tax. To society, the tax is not a cost associated with the project. To the financial entity it is a cost. Similar considerations hold in the case of subsidies given to the project (i.e., where the government shares the money cost of the project). The real costs (the opportunity costs) of the resources used in the project remain the same with or without a subsidy, and these are the costs which are of interest in the economic efficiency analysis. To summarize, taxes and subsidies do influence the w.t.p. for goods and services (and the size of the market and the local price which is established), but they do not alter the real costs of a project nor the real benefits produced by the project from the standpoint of society. The two considerations are quite separate. Depreciation should not be included in the economic analysis (nor should it have been included in the cash flow table). Depreciation is merely an accounting item and represents an internal transfer of some of the money profit from one account to another, in order to provide for replacement of assets. In the economic analysis, it is the real cost of an input that is relevant and its cost is entered at the time it is used in the project. Finally, it should be pointed out that if the value flow table for the economic analysis is derived directly from the cash flow table, the analyst has to be careful to adjust the timing of entries in the value flow table to take into account the fact that costs in the economic analysis occur at the time resources are actually used in the project, or taken out of alternative uses and benefits occur when outputs are consumed. In the financial analysis, costs occur when payments are made, and this may be at some time other than when resources (inputs) are actually used in the project. For example, a given input may be used in the project in year 5, but paid for in years 6 through 8 (on an installment basis). In the cash flow table, the cash outflow would occur in years 6 through 8. In the economic analysis, the value of the input should be entered in year 5. Similarly in the case of outputs or benefits. In the cash flow table for the financial analysis, the cash inflows or returns are entered when they actually occur. A given output may be paid for (to the project financial entity) after (or before) it is actually used (consumed). Thus, the return may appear in the cash flow table in a year that is different from that in which the output actually becomes available. In the economic analysis, the benefit should always be entered in the year in which the output is consumed or used. If all costs and benefits of a project occurred at the same point in time, then the analyst could merely add up costs, add up benefits, and compare them without further adjustment. However, costs and benefits of a project occur over the life of the project. Typically, the life of forestry projects can cover a substantial number of years. Project costs and benefits which occur at different points in time (in different years) cannot be directly compared. That is because value is intimately associated with time. The value of costs and benefits depends on when these costs and benefits occur. Thus, $1 of benefits occurring ten years from now is not as valuable in today's terms as $1 of benefits occurring immediately. If $10 is spent today and $15 is received back tomorrow, that may be acceptable. But if $10 is spent today and the $15 is not received back for 40 years, that may not be acceptable. The amounts are the same. The difference is time and people's willingness to accept delays in consumption. For most forestry project analyses, costs and benefits occurring in the same year are traditionally considered to have the same relative time value in terms of the present. That is, all costs and benefits occurring within a given year, even though they occur at different times during that year, are considered as having occurred at the same time. Thus, there is no problem in summing costs and benefits for any given year to determine net benefits. The problem is how to compare net benefits (costs) which occur in different years. Since time does have an influence on value, the analyst will want to develop information that permits the decisionmaker to compare the costs and benefits which occur at different times and to compare projects which have different cost and benefit streams over time. More specifically, the question is: How can a value occurring in some future year (year n) be equated with a value occurring in the present (year 0)? That is, how can the net benefit (cost) items occurring in the bottom line of the value flow table be compared? The common approach is to apply an adjustment factor to future net costs/benefit values that reflect their present value. The adjustment factor is derived from the accepted time value of money; it is commonly called the discount rate. The adjustment process is called discounting.[22] In the financial analysis, the going rate of interest is the one to use. That will vary from situation to situation. For example, the rate for smallholder tree growers will tend to be higher than the rate for well-established, low risk companies borrowing from regulated banks. In many cases, e.g., when looking at the financial attractiveness of farmer investments, the rate chosen will be only a rough approximation of the average of the various rates relevant to different individuals. In the case of more established entities operating entirely in the monetary sector, an estimated average bank lending rate may be appropriate. The analyst will have to use judgement in choosing an appropriate rate. There is no formula nor mechanistic means for deriving a rate. In the economic efficiency analysis consumer's willingness to pay for goods and services is used as the common yardstick for valuing both costs and benefits. Therefore, the discount rate used to discount costs and benefits should be the consumption rate of interest. This rate should measure the discount attached to having additional consumption next year rather than this year. The appropriate magnitude of this discount rate (or rate of interest) is determined by a number of factors, including society's preference for present consumption at the expense of more rapid growth (higher savings and investment now with higher consumption in the future).[23] As it turns out in practice, just as in the case of SER, the forestry project analyst will generally not have to be concerned with the derivation of an appropriate consumption rate of interest (or shadow discount rate) to use in the overall analysis of economic efficiency. The rate used should be one that is in general use in the project country. Thus, the analyst should obtain the appropriate discount rate from a central planning unit (e.g., national planning office) or from the analyst's administrative agency.[24] At the extreme, if there is no discount rate available from the central planning office at the time the analysis is being undertaken, the analyst can pick a rate such as 8 or 10 percent and use that in the main analysis, and then test the sensitivity of the worth of the project to alternative rates of discount. (As will be discussed later, one widely used measure of economic efficiency, the internal rate of return, does not directly require determination of the appropriate discount rate in order to calculate the measure.) There is sometimes a tendency to argue for use of lower discount rates in social or environmental forestry project analyses. The argument is that there are certain nonquantifiable benefits from such projects which justify the use of a discount rate that is lower than the one used to evaluate other projects in the general economy.[25] This is not recommended. Instead, analysts should use the established or acceptable discount rate used for evaluation of other projects and then discuss in qualitative terms the unique conditions associated with their project that make it different from other projects. This forces analysts and project planners to be explicit about their assumptions, thus avoiding the possible hiding of the efficiency shortcomings of a project behind a lower than normal rate of discount. The process of adjusting a future value to the present is called discounting. The resulting adjusted value is called present value (PV). The basic formula for discounting is the following: where PV = present value FVn = future value in year n i = discount rate (expressed in decimal form) n = number of years until future value occurs is commonly called the discount multiplieris commonly called the discount multiplier There are tables prepared and widely available which give the value of the discount multiplier (l/[l + i]n) for a wide range of interest rates and years. Further, it can also be calculated with simple pocket calculators, if they have a constant or a yx key, or a log function. Thus, the analyst will have no problems deriving the value of the discount multiplier for any number of years. For example, using box 6.1, 1/(1.08)2 is equal to 0.8573, and this value times $100 gives the result of $85.73 arrived at earlier. The basic discounting formula and tables are all that is needed to derive useful measures of project worth. However, in some cases other formulas - derived from the above basic formula - can provide useful shortcuts in carrying out calculations. For example, sometimes equal annual or periodic payments are associated with a project for a number of years during its life. In this case, there are formulas and tables which provide the present value of such payments without having to discount each of the annual or periodic amounts separately. Similarly, in some cases the analyst will want to find an annual equivalent of a given value occurring at some time, or to find the present value of an annual series of payments occurring every year. The most common of these formulas are shown in annex 6.1. Several indexes or indicators of project worth which take the influence of time into account (i.e., involving discounting) are in common use. There is no single measure of a project's worth which is universally accepted, since all share the characteristics of providing only partial information on project performance. Different indicators are needed and used for answering different questions. However, several measures are widely used in financial and economic analyses. These are the net present worth (NPV) and the internal rate of return (IRR). The measures are interrelated since all are derived from the same basic data, namely, the project's costs and benefits, as presented in the value flow tables. The analytical information they provide is, however, somewhat different because of the different ways in which they combine cost and benefit data. These measures are value neutral, and can be calculated for both financial and economic analyses. A Philippine tree-farming project and its value flow (table 6.1) will be used as an example to illustrate net present value. Using the basic discounting process described previously, a measure of the present value (PV) of all net benefits (costs) occurring in the various years of the project can be developed once an appropriate discount rate has been chosen. If a discount rate of 5 percent is used, the present value of each of the net future benefit (cost) entries is as shown in row 2 of table 6.1. Adding these items together (taking into consideration whether they are positive or negative) the NPV for the project is P29,310. What does this NPV of P29,310 indicate? It indicates that, given the assumptions concerning the opportunity costs of the resources used in the project and the w.t.p. for the project output, this project will return a net surplus of P29,310 of consumption benefits in present value terms taking into account the assumed consumption rate of interest (discount rate) of 5 percent, or the relative weight which society places on present consumption versus investment and future consumption. By using the discount rate it has ensured that the NPV result is comparable with those obtained for other projects that would involve different cost and benefit streams over time, i.e., the effect of different time values associated with consumption gained or foregone at different times in the future have been eliminated. In general, given the above, it can be said that in economic efficiency terms any project that provides a positive NPV is an efficient use of the resources involved, assuming that each separable component also has a NPV ³ 0 and the project is the least cost means of achieving the particular benefits. (See chapter 3 for review of the three conditions for economic efficiency.) While a project meeting these conditions is economically efficient, it still may not be chosen for implementation. That depends on the total budget available and the NPV associated with other projects on which the budget could be spent. A project for which the estimated NPV is negative is not economically acceptable. The negative NPV indicates that there are better uses for the resources involved in the project, i.e., given their opportunity costs and timing and the discount rate, they could be used elsewhere to produce more consumption benefits in present value terms. Table 6.1. Net present value - Philippine project (5 percent discount rate; value in constant pesos). a Item in row 1 divided by (1.05)n for years 1 to 15. b The sum of items in row 2. Table 6.2. Economic rate of return (ERR) - Philippine project. a i.e., sum of the present values of net benefits (costs) discounted at 32 percent per year. In the previous example of NPV calculation, the NPV was P29,310 when a 5 percent discount rate was used. The question could be asked: What rate of discount would have to be used to obtain a NPV of zero, i.e., what is the implied discount rate that would make the PV of project benefits equal the PV of project costs? That rate is called the internal rate of return, or the IRR. It is essentially a breakeven discount rate in the sense that the PV of benefits equals the PV of costs. The IRR concept is used both in financial and economic efficiency analysis to produce either an internal financial rate of return (FRR), or an economic rate of return (ERR). The IRR is a commonly used measure in financial analysis. It is comparable to the ERR in terms of derivation, although it means something slightly different. The FRR shows the investor what the average earning power is associated with a given investment of his funds. More specifically, it is the average rate of return on the invested funds outstanding per period while they are invested in the project, or that rate of interest which makes the NPV (using market prices) equal to zero. Thus, a FRR of 10 percent indicates to the investor that s/he will receive $0.10 back per year for each $1 invested during the years in which the investment is left in the project. This is a useful measure for an investor, since it provides a clear means for comparing alternative uses of funds. Say that the investor's best use of funds, other than putting them in the project, is to put them in the bank at 6 percent interest per year. The investor compares the rate of return on the project (10 percent) with the rate of return from the bank (6 percent), which is called the alternative rate of return (ARR), or the investor's opportunity cost of capital,[26] and s/he then knows that the project use will give a greater return than the best alternative use.[27] The ERR is similarly interpreted, except it shows the decisionmaker what society can expect to receive back in consumption benefits for a given investment of its scarce resources. In other words, if the calculated ERR is 10 percent, this tells the decisionmaker that the average annual return of consumption benefits on resources outstanding per period while they are invested in the project will be $10 for every $100 of resources invested and left in the project. The ERR will be compared with the consumption rate of interest to see if the project earns enough to make it worthwhile to invest (forego consumption now in favor of future consumption). Say that the relevant consumption rate of interest is 5 percent. This means that society wants to get at least a 5 percent rate of return on investment of its resources to make it worthwhile to forego present consumption in favor of investment and future consumption. If the ERR turns out to be 10 percent for a given project, this means that, on the average, society will get more than the minimum acceptable 5 percent back. Thus, the project is economically efficient in terms of its use of scarce inputs assuming that the other two conditions for economic efficiency are met. The Philippine example is used to show how the ERR is calculated. The undiscounted net benefit (cost) items for each year are shown in row 1 of table 6.2. By discounting these by 32 percent the PV figures as shown in row 2 are obtained. If these values are totalled, the NPV is zero which by the definition occurs when the economic rate of return is used to discount all net benefits (costs). Thus, 32 percent is the ERR. The calculation to find the ERR or the interest rate which makes NPV equal to zero has to be by trial and error.[28] Annex 6.2 provides details on how to calculate the ERR. What does the ERR of 32 percent indicate in the Philippine example? It represents the yield of the resources used in the project over the project period. It means that $1 invested in the project will generate $0.32 per year for every year that the $1 remains committed to the project. It also indicates that this return is greater than the assumed consumption rate of interest of 5 percent, which measures the trade-off between consumption in a given year t0 and consumption delayed until the following year, t1.[29] Society should be interested in leaving its resources in a project such as this rather than consuming them now because it will receive more back in the future than is needed to satisfy its perceived trade-off between present and future consumption. Just because a project has an ERR that exceeds its consumption rate of interest, this does not automatically mean that the project will be accepted and implemented. It does mean that the project represents an efficient use of resources, given acceptance of the consumption rate of interest as being the relevant one.[30] However, there is always the possibility that other uses of a limited budget can provide higher rates or return than the project being studied. The above two measures can be used to answer the economic efficiency question as it relates to both project components and entire projects. When NPV is used, the usual approach - as discussed in chapters 2 and 3 - is to analyze components first, making sure that all separable components ending up as part of a project package have NPVs at least equal to zero. Once a set of economically efficient project components has been assembled into a project, then exactly the same approach can be used in calculating the NPV or ERR for the total project. As mentioned, the least cost condition for economic efficiency does not involve calculation of a NPV or an ERR. Rather, the costs of alternatives are compared directly to find the least cost alternative. Some analysts prefer to treat the costs avoided by undertaking the project instead of the least cost known alternative as the benefits of the project alternative being analyzed. These benefits are then used in calculating a NPV for the project alternative being analyzed. If it is positive, then this shows that it is the least cost alternative among the known set of alternatives. If the NPV is zero, then the least cost alternative to the project has costs exactly the same as the project being analyzed. If the NPV is negative, then the alternative to the project being analyzed has lower costs. While there is nothing conceptually wrong with this approach, it can become confusing; thus, it is recommended that costs of alternatives are compared directly. (Confusion can arise in cases where the project has to be compared with other entirely different projects which are competing for the same budget. In point of fact, the costs avoided by undertaking one alternative rather than another to achieve a given output do not necessarily represent a true measure of benefits.) NPV and the ERR represent alternative means of presenting the relationship between costs and benefits. In mathematical terms the relationship between the two is as follows: Net present value = Economic rate or return is that discount rate ERR such that where Bt = benefits in each year t Ct = costs in each year t n = number of years to end of project i = discount rate or consumption rate of interest (CRI) ERR = the internal economic rate of return From these definitions, the following relationship holds: When NPV = zero, then the ERR = i, or the consumption rate of interest (or the discount rate used in calculating the NPV). Given the definitions and the above relationship between the two measures, what can be said about the information provided by each of them in terms of the three conditions for economic efficiency mentioned in chapter 2.[31] Neither of the two measures of project worth tell anything about the least cost (or third) condition for economic efficiency. This condition has to be studied in a separate analysis undertaken in the design and preparation stages of the project. Both measures do provide information related to whether PV of benefits are less than, equal to, or greater than the PV of costs for a project component and the total project. In point of fact, they both provide exactly the same answer to the question of whether or not a project or project component is economically efficient in terms of these first two conditions. If a project is accepted as being efficient in terms of one measure (i.e., NPV ³ 0), it will also be acceptable in terms of the other measure (i.e., ERR ³ CRI) and vice versa. So far in the discussion, it can be seen that either of the two measures could be used equally well to determine whether a project is economically efficient (assuming no lower cost means to achieve the project objectives is known to exist). Thus, the choice of which of the two to calculate and use is unimportant in terms of this basic question, although the analyst obviously has to calculate the measure commonly used by the institution for which s/he is carrying out the analysis. Each of the two measures provides additional information that the other does not provide. The NPV measure, in contrast to the ERR, provides information on the absolute value or magnitude of the present value of net benefits of a project. Yet it tells nothing about how large the cost will be to achieve the NPV. Thus, there could be a project with a NPV of $1,000 which costs $2 million or one with the same NPV that costs $5,000. Both would have the same NPV. On the other hand, the ERR is a relative measure of project worth, which gives information on the returns per unit of cost and thus provides more relevant information for comparing the benefits which can be expected from alternative uses of a limited budget. Therefore, it is more useful for ranking independent project alternatives when it is not possible for budget or other reasons to undertake all projects that meet the basic economic efficiency conditions. Table 6.3 summarizes the differences between ERR and NPV measures. Table 6.3. Summary of the measures of investment worth.* * There is some disagreement about the relative merits and applications of the criteria. The recommendations presented here represent the authors' viewpoints. ** Any measure of investment worth only provides one source of input into the final decisionmaking process. Other factors that should be evaluated include a sensitivity analysis, personal preferences, and distribution of costs and benefits throughout the life of the investment. *** Alternative rate of return. As mentioned in the text, using the value flow table as a basis for NPV and ERR calculations, the analyst avoids the need for discounting and compounding formulas other than the simple present value formula. However, there are occasions where the analyst may find it convenient to use other formulas, all derived from the basic one, which permits s/he to calculate in one step the present values for equal annual or periodic series of payments or to obtain an annual equivalent for a present or future value (e.g., where s/he wants to calculate a rental equivalent for a purchase price). 1. Calculating the present value of a periodic series of equal payments Table A6.1 summarizes the main formulas needed to calculate the present and future values of annual and periodic payments (costs or benefits). The PV derived by using these formulas is expressed in terms of one year (period) prior to the year (period) when the first payment occurs. Thus, the analyst has to make sure that s/he appropriately compounds or discounts the result if s/he wants PV expressed in terms of a different year (period). Application of the formulas is illustrated below. PV of equal annual payments Assume a situation where there is an annual maintenance fee of $12 for a plantation which starts at the beginning of year 2 (the third year) of the project and continues up to and includes year 15. Thus, there are (15-2) + 1, or 14 equal payments of $12. How would the PV of this series of payments be calculated, if the discount rate is 8 percent? First, applying the appropriate formula from table A6.1 (formula 1 for a finite number of payments) the following result is obtained: This gives the PV in year 1 of the 14 payments starting in year 2. Second, discounting this value ($99) back one more year ($99/[1.08]) the PV in year zero is $91.60. This formula might be useful if, for example, the analyst wanted to compare the present value of two alternative equal annual cost streams. Assume that two alternative plantation management schemes were possible, one involving four equal costs of $30/ha for years 1 to 4 and another involving ten equal costs of $10/ha for years 2 to 11. The PV in year zero for the first alternative would be (using 8 percent): (This is already in year zero terms since payments start in year 1.) For the second alternative, the PV in year zero would be: year 1 year zero Thus, the analyst can see that in PV terms the second alternative provides the lowest cost, assuming that the relevant discount rate is 8 percent. Present value of a series of equal periodic payments If payments (costs or benefits) occur every t years instead of every year for a specified period of time, then formulas 5 and 6 in table A6.1 to obtain PVs can be used. For example, suppose there is a situation where fertilizer will be applied to a stand every five years, starting five years from now and lasting during the entire rotation of fifty years except for year fifty. This means that there would be nine equal applications starting in year five and ending in year 45. Assume that the cost each time is estimated to be $20/ha. How would the PV of these payments to estimated? Looking at table A6.1, formula 5 would be used for a finite number of periodic payments. The PV would be calculated as follows, assuming a discount rate of 8 percent, t = 5, and N = 9: If there were also an application of fertilizer at the time of establishment, that amount would have to be added to the PV obtained above. The most common use in forestry of formulas for calculating the PV of series of equal periodic payments is in calculation of the SEV. This is explained and illustrated below. Soil expectation value. The SEV gives an estimate of the present value of land if it were put into forestry and produced an infinite number of net returns of $R every r years (where r is the rotation length). To estimate the SEV, the net benefit of forestry production at the end of the first rotation R is calculated, without taking actual land cost into account and then the NPV of a future periodic series of net benefits of $R is computed beginning with $R received at the end of the first rotation. Thus, for example, assume a situation for a plantation as follows: The compounded value of the establishment cost at the end of the first rotation (year 11) is: The compounded value in year ten of the ten equal annual costs ($10 each year between years 1 and 10, both inclusive) can be calculated by using formula 2, table A6.1: which must be compounded for one additional year: Therefore, total costs at the end of the first rotation (year 11) are $583 + 157 = $740 and net benefits at rotation age are $1,000 - $740 = $260. The present value of an infinite series of payments of $260 received every 11 years, or the SEV of this forestry management alternative, can be calculated by using formula 5 in table A6.1, for an infinite number of periods: What does this SEV of $195 mean? It has several meanings. Most commonly in forestry it is used to determine what amount could be paid for the land to breakeven, i.e., have PV of costs equal PV of benefits, using a discount rate i (in this case 8 percent). More generally it indicates the PV of the productive capacity of the land, given the values assumed and the assumption that the land could continue to produce timber in perpetuity at the given rate. 2. Annual equivalency formulas Formulas 3 and 4 in table A6.1 are used to calculate annual equivalents of given amounts of PV of costs or benefits. The formulas are merely the inverse of formulas 1 and 2. Assume, for example, that two alternative incentive programs for tree farmers are being compared. One alternative is to given them a lump sum today of $100. The other alternative considered is to provide them with five equal payments over five years, starting one year from now. For the latter incentive to be effective, the annual amount should equal the $100 of PV using their relevant discount rate. In this case it is assumed to be high - 30 percent - since they value present income considerably higher than future income. To find the annual payments necessary, formula 3 for a finite number of payments is applied. The annual amount that would have to be paid, starting one year from now, to make the farmers indifferent between $100 now and the five equal payments, would thus be: In other words, given their relevant discount rates (or their trade-off rates between present and future income) they would have to be paid $41 per year for five years to make them indifferent between the two payment forms. Table A6.1. Annual and periodic payment formulas. i = rate of interest (discount) in decimal form n = number of years or periods until last payment starting with 1 year from now t = number of years between periodic payments Although several relatively inexpensive hand calculators contain programs (or can be programmed) for rate of return calculations, the analyst might be faced with situations in which the computation of ERR would have to be based on more rudimentary methods. There is no formula for calculating the ERR when more than one cost and/or benefit is involved. Therefore, a trial and error technique has to be used. The approach is as follows: 1. First, calculate a NPV using a rate which is estimated to be in the neighborhood of the expected ERR. If the NPV is negative, then the ERR must be lower than the rate of discount used. If the NPV is positive, then the ERR must be higher than the discount rate adopted. 2. If the first NPV calculated is negative, then reduce the discount rate up to a point where the calculated NPV is positive and vice versa if the first NPV calculated is positive. The ERR must now lie between the two rates of discount used in generating the positive and negative values of NPV. 3. Estimate the ERR by using the following formula: 4. Repeat steps (1) - (3) for a more precise result, if needed. The following example, which uses the figures of the Philippine tree farm project, illustrates the use of this technique: Table B6.1 shows in row 1 the net benefits (costs) of the Philippine tree farm project (from table 6.1). The second row contains the PV of each annual flow discounted at 20 percent. The NPV, using this discount rate is positive and equal to P 4,638 and, therefore, the ERR must be higher than 20 percent. A further discounting attempt at 30 percent generated a still positive NPV equal to P 453. Therefore, a still higher discount rate of 35 percent was tried, which rendered a negative NPV of -P 543. The ERR must then lie between 30 and 35 percent. Using the formula from step (3) above, the ERR of this project is estimated as follows: This is rounded off to 32 percent. A further interpolation using a narrower range of 31 and 33 percent would have produced NPVs equal to P 215.6 and -P 198.5, respectively. Using these two new values, a second estimate of ERR would be 32.04 percent. But since the result is being rounded off to the nearest whole percentage point, this additional refinement is unnecessary. Table B6.1. Calculating the ERR - Philippine project. a From table 6.1.
http://www.fao.org/3/t0718e/t0718e07.htm
CC-MAIN-2020-16
refinedweb
6,667
58.42
strings describing the connected joysticks. This can be useful in user input configuration screens - this way, instead of showing labels like "Joystick 1", you can show more meaningful names like "Logitech WingMan". To read values from different joysticks, you need to assign respective axes for the number of joysticks you want to support in the Input Manager. The position of a joystick in this array corresponds to the joystick number, i.e. the name in position 0 of this array is for the joystick that feeds data into 'Joystick 1' in the Input Manager, the name in position 1 corresponds to 'Joystick 2', and so on. Note that some entries in the array may be blank if no device is connected for that joystick number. // Prints a joystick name if movement is detected. function Update () { // requires you to set up axes "Joy0X" - "Joy3X" and "Joy0Y" - "Joy3Y" in the Input Manger for (var i : int = 0; i < 4; i++) { if (Mathf.Abs(Input.GetAxis("Joy"+i+"X")) > 0.2 || Mathf.Abs(Input.GetAxis("Joy"+i+"Y")) > 0.2) Debug.Log (Input.GetJoystickNames()[i]+" is moved"); } } using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { void Update() { int i = 0; while (i < 4) { if (Mathf.Abs(Input.GetAxis("Joy" + i + "X")) > 0.2F || Mathf.Abs(Input.GetAxis("Joy" + i + "Y")) > 0.2F) Debug.Log(Input.GetJoystickNames()[i] + " is moved"); i++; } } } Did you find this page useful? Please give it a rating:
https://docs.unity3d.com/2017.3/Documentation/ScriptReference/Input.GetJoystickNames.html
CC-MAIN-2021-31
refinedweb
239
56.55
go to bug id or search bugs for New/Additional Comment: Description: ------------ Hello! I am trying to install stats. When I restart Apache I get the error: PHP Warning: Unknown(): Invalid library (maybe not a PHP library) 'stats.so' in Unknown on line 0 I have tried to install with "pear install stats". Then I tried to install manually from CVS (phpize et al.). The error remains the same. I have tried both dl('stats.so') or extension=stats.so in php.ini. The error is either displayed or written in the apache error log. Some info (could be irrelevant): gcc -v gcc version 3.4.4 (Gentoo 3.4.4-r1, ssp-3.4.4-1.0, pie-8.7.8) Reproduce code: --------------- pear install stats echo "<?php dl('stats.so'); ?>" |php Expected result: ---------------- nothing Actual result: -------------- Warning: dl(): Invalid library (maybe not a PHP library) 'stats.so' in - on line 1 Add a Patch Add a Pull Request windows builds from snaps.php.net have the same problem I have same problem. Linux 2.6.14-gentoo-r5 on x86 Using PHP-5.0.5 and tried too PHP-5.1.2 doesn't make a differens. PHP Warning: PHP Startup: Invalid library (maybe not a PHP library) 'stats.so' in Unknown on line 0 Here's the steps to fix the problem: 1. run phpize 2. edit generated Makefile 3. add -DCOMPILE_DL_STATISTICS to CPPGLAFS 4. make, make install Or else 1. open statistics.c 2. remove the #ifdef and #endif in line 136-138 around ZEND_GET_MODULE(stats) Or else, maybe, compile it staticly with php :D Hi, can you test changing COMPILE_DL_STATISTICS to COMPILE_DL_STATS in statistics.c and try to compile it again as a module without touching anything else? Thanks, Andrey Hi, I have changed COMPILE_DL_STATISTICS to COMPILE_DL_STATS in statistics.c, then did make clean, make, and install the extension, and ran the standard deviation test, result is: ------------------------------------------------ string(11) "2.449489743" Warning: stats_standard_deviation() [function.stats-standard-deviation]: The array has zero elements in /var/www/sd.php on line 5 bool(false) float(2.1213203435596) ---Variance--- float(3.6) float(4.5) ------------------------------------------------ So I guess it worked. Same problem i get with php_stats.dll i am trying it on pws Do you have fresh dll? I think this was fixed few weeks ago, at least the Unix platforms, but the problem probably was caused by the same root. I had used the latest but nothing happened. I assume this issue has already been resolved, hasn't it? Oh yes, a very long time ago! The computer I mention in the bug has been broken since, and I am still using the math-stats library in every installation. Thanks for the fast reply. Closing.
https://bugs.php.net/bug.php?id=56875&edit=1
CC-MAIN-2021-10
refinedweb
458
70.29
Details Description We should use approximations in order to deal with queries/filters that have slow iterators such as doc-values based queries/filters. Activity - All - Work Log - History - Activity - Transitions Same file with the conventional name. +1 For the deprecations, can we still remove the code and implement the deprecated ones with QWF(Query)? E.g. is it possible to do it like TermFilter where we just do: @Deprecated public class FieldValueFilter extends QueryWrapperFilter This way we don't have to really maintain the code to these old ones. One thing I'm concerned about if we do that is that these filters will not expose random-access anymore, which could break some applications? OK I agree, lets just deprecate them as-is for now. On a followup issue, maybe we can allow a similar api to be exposed on query/weight/scorer, so that booleanquery can do the optimizations filteredquery and booleanfilter are doing (any optimizations that really help and not hurt). If we did this, then I think we could remove the duplicate impls. Commit 1661156 from Adrien Grand in branch 'dev/trunk' [ ] LUCENE-6268: Replace FieldValueFilter and DocValuesRangeFilter with equivalent queries that support approximations. Commit 1661167 from Adrien Grand in branch 'dev/branches/branch_5x' [ ] LUCENE-6268: Replace FieldValueFilter and DocValuesRangeFilter with equivalent queries that support approximations. Bulk close after 5.1 release Here is a patch, it replaces: These new queries support two-phase iterators with an approximation which matches all documents between 0 and maxDoc-1. The new queries do not have the "docsWithField instanceof BitSet" optimization anymore since the 5.0 doc-values format does not use bit sets for any of its docWithField implementations. On 5.x we could just deprecate these filters.
https://issues.apache.org/jira/browse/LUCENE-6268
CC-MAIN-2017-04
refinedweb
290
53.31
I was trying to read a changing file in Python, where a script can process newly appended lines. I have the script below which prints out the lines in a file and does not terminate. with open('tmp.txt','r') as f: while True: for line in f: print(line.replace('\n','')) a d 2 3 echo "hi" >> tmp.txt Looking at the objects f in python 2.7 vs 3.5 they are slightly different The following with open('tmp.txt','r') as f: print(f) print(type(f)) In python 2.7 returns <open file 'tmp.txt', mode 'r' at 0x0000000003DD9780> <type 'file'> Whereas in python 3.5 returns <_io.TextIOWrapper <class '_io.TextIOWrapper'> The same behavior can be obtained in python 2.7 using import io with io.open('tmp.txt','r') as f: while True: for line in f: print(line.replace('\n',''))
https://codedump.io/share/DDdD3rZZHMo2/1/reading-changing-file-in-python-3-and-python-2
CC-MAIN-2017-13
refinedweb
147
80.38
10 September 2012 04:32 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The acetone-to-IPA facility was originally due to come on stream in early November but the start-up schedule has been delayed, possibly by one to one-and-a-half months because of an “unexpected factor”, the source said without giving further details. LG Chem, the largest IPA producer in IPA prices in southeast Asia have been going up sharply amid tight supply on the back of several plant shutdowns but the situation is expected to improve by the second half of October as two regional facilities would have completed their maintenance by then. JX Nippon Oil & Energy plans to resume operation at its 85,000 tonne/year plant at “We are worried that prices will fall once these plants return to normal production as demand in Spot prices rose by $15/tonne (€11.7/tonne) to $1,355-1,380/tonne CFR (cost and freight) SE (southeast) (
http://www.icis.com/Articles/2012/09/10/9593950/s-koreas-lg-chem-may-delay-start-up-of-new-ipa-plant-to-end-2012.html
CC-MAIN-2015-22
refinedweb
161
50.91
Currently it always loses GL context when paused, but it's not necessary to do so in all cases. Some gpus will support multiple EGL contexts. I'm part of the MonoGame team and have implemented reloading of textures on resume. This works but it's still a very poor user experience. The larger the game the longer it takes to resume and even showing a fullscreen ad causes the context to be lost and forces a long resume. As far as I can tell, AndroidGameView plays a similar role to Android's built in GLSurfaceView. This class has the same problem. Newer SDKs offer 'setPreserveEGLContextOnPause' but people have fixed (or worked around) it in older SDKs by customising this class: Can someone from Xamarin please take a look at whether doing something like this would be an option for AndroidGameView? Currently it seems that we are stuck with the reloading textures even on the latest SDKs. Aranda Perhaps we could look at handling our own surface and context lifetime for a future release? We have been given permission to take whatever we want from ExEn. But AndroidGameView should handle it correctly as well. Hi Aranda OK, I've done some rework on the AndroidGameView, one of the issues was that when ever the surface was destroyed the context was as well. I've reworked the code a bit so that it only destroys the eglSurface when the SurfaceDestroyed (ISurfaceHolder holder) callback is called. It will now hang on to the Context Object. Now when we then get a surfaceCreate we create a new surface and then attempt to use the previous context, if we get a ContextLost error we create a new one. I am also adding a OnContextLost virtual method as well as a ContextLost event to the game view so people can hook into when this happens. I have your email so I'll sent you a test package , let me know how it goes Dean Answers Just wanted to bump this thread and also point out a couple of things: Many games handle the switch to OS and back to the game without pause to reload textures (just try Angry Birds). Be a shame if AndroidGameView can't do it. ExEn framework manages this too. Instead of AndroidGameView it's handling it's own surface and context lifetime using the Javax.Microedition.Khronos.Egl namespace. The crux being that when the surface is destroyed the context is not necessarily lost. I've had a look at doing this but it's not obvious where to start. ExEn's implementation uses Javax.Microedition.Khronos.Egl, which only has EGL1. I can't find an equivalent for EGl2 and it's not clear how to use OpenTK's bindings in a similar way. Thankfully Dean has said he'll investigate the internals of AndroidGameView. Hi Dean, Thanks for taking a look. I'll gladly test out any packages you want to send through. Excellent work, @Dean. @dellis has now sent me through two samples (with old and new MfA code) and I've confirmed that he's fixed the issue on my Galaxy S2. Excellent work indeed! Hi Here are the links to the 2 test applications. 1) The old version which should demonstrate the issue. After pressing the Home button, the textures will be white. 2) The new version with the fix Can I ask if you can test the apps and report back the results please (along with some device info like device, os version). I only have 2 android devices and would like to get this tested on as many as possible to ensure I haven't broken anything. Dean All devices tested have behaved as expected for both apks (old one shows white cube after resume and new one has correct texture): Samsung Galaxy S2, Android 4.0.4 HTC Desire, Android 2.3.7 Works for me on Samsung Galaxy S, Android 2.3.6 Kindle Fire 7" HD, Android 4.0 Samsung Galaxy Player 5 YP-G70 - does not demonstrate this bug using the control APK. This device is running 2.3.6. Correction. If I leave the device in portrait and go to the lock screen and then return, the textures are fine. If I launch, then move to landscape, then press the power button to enter the lock screen, then resume, that's when I see the white cube (fail condition). Using the "new" APK, start the app in portrait, switch to landscape (have to interact with the scene to switch orientation), click power button, return to portrait and unlock screen, in landscape mode the cube appears with textures. Acer Iconia A500 with lock sceen set to swipe. Does not demonstrate the fail condition using the control APK. This device has a portrait and landscape mode lock screen. This device is running 4.0.3. Samsung Galaxy Tab 2 GT-P3113 updated today to 4.0.4 does not demonstrate the failure condition using the control APK. This device has a portrait and landscape mode lock screen. Note the "new" APK also works fine on this device. SONY XPERIA LT26i running 4.0.4 does not demonstrate the failure using the control APK. I tried several times in landscape mode for the app, then switch to portrait mode to unlock and back to landscape, never did the textures disappear to white. This device does NOT have a dual orientation lock screen. @jacobanderson bizarre that none of your devices show the problem. Just to be sure, are you pressing Home button to reproduce this? I don't think just locking the device or changing orientation will exhibit the problem. Nexus 7 (Android 4.2.1): Both APKs work as expected. Nexus S (Android 4.1.2): Both APKs work as expected. hey Aranda. If I just press the home button then both APKs work well. Right. So it would seem you have some nice devices that manage to not lose context most of the time. I also just read your correction about the Samsung Galaxy Player 5 YP-G70, so you can reproduce the issue, and fix, on at least one device. It seems pretty conclusive so far. Looks like you fixed it Dean... great work! Now to get it integrated with MonoGame Bump! Just wondering if there's been any movement? Seems like @dellis had a fix, so it'd be great if Xamarin could give some priority to this issue. Anyone? Bueller? This is a pretty severe shortcoming of Xamarin (and apps built using it). Someone not using Xamarin can just use GLSurfaceView.setPreserveEGLContextOnPause, but Xamarin encourages/forces apps to use AndroidGameView, but that doesn't derive from GLSurfaceView so we can't use setPreserveEGLContextOnPause. @MikeDussault.8139 agreed. It's nearly been a year since I reported this problem. @dellis are you able to raise it internally again? If fixing AndroidGameView is viewed as a potentially breaking change, perhaps Xamarin could provide a sample showing how to roll our own game view and use GLSurfaceView.setPreserveEGLContextOnPause? @Aranda I have moved teams since this was looked at, I passed on my findings and code to the OpenTK maintainer within Xamarin to get it into a release. There is some work going on to update the mono/opentk stuff and I believe the android changes including the AndoridGameView are going to be open sourced (see). I don't have any time scales on this, I will bump internally again and see what happens. Hello, any updates on this issue? I can't port any game from iOS to Android without tons of work due to this problem. I just got an update from Bugzilla which says this is now fixed and may be in the next alpha. Yay! Hello, I would like to know if there is any chance that this fix lands on a public channel anytime soon? In some situation, it can be quite severe. It affects anything that switch to new activities, like: We are getting a lot of bad reviews on the Play Store with comments about this issue and a much lower conversion rate compared to other platforms. Could we please have an estimated time of deployment? It would greatly help us to know how to proceed with our upcoming game due next week. Thanks for the support. Has anything been done about this recently? I'm using Monogame to develop an Android game and there's no way I could release the game in this state - switching between apps and returning to the game takes several seconds which is an extremely poor experience for the user. I would delete the game from my phone immediately if I experienced this. I fail to see how lots of other Android games manage this sort of switching and returning to the game instantly and we can't. Any update as to how to work around the problem or when this supposed fix is going to be available will be greatly appreciated! @DeanEllis can you please follow this up internally again? There was a post in the associated Bugzilla saying it was fixed (), but I've not had any reports of it appearing in a beta build yet. @Aranda its in master the next release of Xamarin.Android from that branch will probably be june Thanks Dean. As a side note, when this is 'released' how would we actually get the fix? What/how would we download to get it? Many thanks for the follow up. @IanWeeks In this case the OpenTK.dll is included in the updates for the Xamarin.Android product. If you have Xamarin.Android installed you should get update notifications in both Xamarin Studio and Visual Studio of new versions. I'll try to remember to post on this thread exactly which release will contain this fix so you can all download it Looks like the fix made it into Xamarin.Android 4.12.3. It will require code changes in MonoGame though since there are new events raised for when the context is lost/set. Ok, awesome Dean. Do you know when that version was released? So if we start using the new events in MonoGame, this will mean users of older Xamarin.Android versions will not compile right? yes, for me this is the worst issue with monogame, it makes android app port almost useless... Hope it will be fixed soon... @DeanEllis - I downloaded the latest version of Xamarin (4.12.3) but nothing seems to have changed. My version of OpenTK.dll is still 0.9.9.3 - that never got updated. What do I need to do to implement this fix? What are these new events that you mentioned when the context is lost/set - what coding do we need to implement - any examples? I'm just very frustrated about the massive pauses that occur when you switch between applications or go back to the home screen and then try and return to the game - several seconds of blank screen - this is still occurring after downloading 4.12.3. Please help! Thanks, Ian. Hi @IanWeeks I'll knock up a simple demo app for you today showing how to use the new OnContextSet OnContextLost methods. If you are using MonoGame this information will need to be translated over to what ever they do. @Aranda yes if these new events are used people on older versions of the libraries will not be able to compile. @IanWeeks also unhelpfully the version number did not change it is still 0.9.3.3. These new methods/events are available in both the 0.9.3.3 and the 1.0.0.0 versions of OpenTK (i.e OpenTK.dll and OpenTK-1.0.dll) OK, thanks @DeanEllis - looking forward to your simple demo app to show us how to use these methods/events ...
http://forums.xamarin.com/discussion/comment/2418/
CC-MAIN-2016-44
refinedweb
1,979
73.98
>>.'" So? (Score:2, Insightful) Re:Perfect example (Score:4, Insightful) There's no IP. There is copyright, patents and trademarks. This sounds like a trademark thing, so no need to confuse the issue. Re:Non-issue (Score:3, Insightful) Re:So? (Score:5, Insightful) Some things are ethically questionable even when there is no legal problem involved. A concept often forgotten in the corporate world. They should plan better (Score:2, Insightful) Google simply does not care. (Score:2, Insightful) Re:They should plan better (Score:5, Insightful) As someone stated before, this is not a legal issue. It's just about basic politeness. They should change it... (Score:3, Insightful) Re:Go! (Score:3, Insightful) I don't know if there's a Poet Laureate position for Slashdot, but either way I nominate this guy. Brilliant!. Re:I said it yesterday, but... (Score:3, Insightful) Re:Hmmm... :Slashdot needs a voting mecahnism for this (Score:2, Insightful) A poll would be interesting. Personally, I think that "Go and "Go! are two different names, so there is no problem. Unless you get excited about the first one... Re:Go! (Score:3, Insightful) Re:Go! (Score:1, Insightful) That little light on your dashboard? That's your "broken sarcasm detector" indicator light. You should get that checked out. Re:Go! (Score:1, Insightful) It's worse than that. You'd think Google would have a comprehensive understanding of the value of picking a term that would make web searches easier. "Go" is rather a common word. There's the game, the other programming language, and it's everyday uses. Talk about namespace collision! Maybe they should have named it "GoTwo"? :-) Re:Hmmm... :Tingo? (Score:1, Insightful) I have recommended gingo (gingo is not go). Re:How come they didnt google "Go" lol (Score:3, Insightful) Because Googling for "go" gets you 2,950,000,000 hits. Yes, that's billions. And yet they didn't see that choosing such a common word for a language name was a bad idea. Ah, how the mighty goof up. Re:Go! (Score:3, Insightful) So what? (Score:3, Insightful) "From what I've read, Go! was pretty much unknown to anyone outside a very small group 2 years ago." From what I've read, Go was pretty much unknown outside of Google until about a week:So? (Score:3, Insightful) "Like reusing the name of an obscure project that seemingly died years ago and nobody here has even heard of?" Right. If Slashdotters haven't heard of it, there's no ethical issue. Re thinking about offering it. I saw mention on a tv special about google over a year ago that they were working on a language with short compile times. So unless you have something better than nu uh to reply with save the text. I won't be feeding the trolls.
https://developers.slashdot.org/story/09/11/12/1256234/Google-Under-Fire-For-Calling-Their-Language-Go/insightful-comments
CC-MAIN-2016-40
refinedweb
480
78.35
Axel Naumann on everyone agrees that C++ needs a facility to query C++ code itself: types, functions, data members etc. And that this facility should be a compile time facility, at least as a start. But what should it look like? Several proposals were on the table over the last few years that SG 7 existed; in Jacksonville those were N4428, P0194 and P0255. Here are the main distinguishing features, and SG7's recommendation: How to get reflection data Two major paths to query an entity (a base-level construct) were proposed: operators or templates. Templates need to obey the one-definition-rule (ODR); any recurrence must be exactly the same as the previous "invocations". They do not allow to test for "progress" within a translation unit: do we have a definition? Do we have a definition now? And now? For template-based reflection, the answer must always be the same. But even more importantly, C++ only allows certain kinds of identifiers to be passed as templates arguments. Namespaces, for instance, are not among them. There must be no visible difference between passing a typedef or its underlying type as a template parameter, making it impossible to reflect namespaces or typedefs, or requiring language changes for the sake of reflection. Operators, on the other hand, are a natural way to extend the language. They do not suffer from any such limitation. Additionally, they signal clearly that the code is reflected, making code review simpler. Traits versus aggregates How should reflection data be served? Some proposals were based on structure-like entities. Code could use members on them to drill into the reflection data. This meant that the compiler needs to generate these types for each access. The objects could be passed around, they would need to have associated storage, at least at compile-time. The alternative is an extension of the traits system. Here, the compiler needs to generate only data that is actually queried. It was also deemed simpler to extend, once reflection wants to support a more complete feature set, or once reflection wants to cover new language features. Traits on meta or traits on code? These traits can be applied on the C++ code itself, as done for the regular C++ type traits, possibly with filters to specify query details. Or, and that is the main distinguishing feature of P0194, an operator can "lift" you onto the meta-level, and reflection traits operate only on that meta level. P0194 Meta-objects are of a meta-type that describes the available interfaces (meta-functions). All of that can be mapped into regular C++ these days, with some definition of “these days": meta-objects are types; they are unnamed and cannot be constructed; they are generated by the reflection operator, for instance reflexpr(std::string). Meta-functions are templates that "take" a meta-object and "return" a constexpr value or a different meta-object, for instance get_scope. And the big step for the Jacksonville-revision P0194R0 of the proposal has happened for the meta-types: they are now mapped to C++ concepts! That is obvious, natural and makes the proposal even simpler and even more beautiful. Reflection-types described by concepts You can query for instance the type property of a meta-object, using get_type. But not all meta-objects have a type; it would not make sense to call that on the meta-object of a namespace. The meta-object (remember, a type) must be of a certain kind: it must implement the requirements of the meta::Typed concept. The type returned by reflexpr(std) does not satisfy these requirements. Easy. For each meta-type (concept) there exists a test whether a meta-object (that all satisfy the meta::Object concept, by definition) is of that meta-type, i.e. satisfies the concept. For instance, get_type is only valid on those meta-objects for which has_typed_v<meta::Object> is true. Reflection language versus Reflection library P0194 proposes the basic ingredients to query reflection in C++. You might find it too basic or too complex. We use it to lay the first few miles of the train track, to agree on the design and specify the “language" used. Once we have that, extending it to become a full C++ reflection library is much simpler than providing a complete feature set and defending the design against ten other proposals in parallel. Matus, the original author, has already shown that P0194 is extensible. Like mad. And now? Jacksonville was a big step: SG7 agrees on the recommended design. Now we need to agree on the content. For instance, should reflection distinguish typedefs and their underlying type? Take struct ArrayRef { using index_type = size_t; using rank_type = size_t; rank_type rank_; }; Should reflection see the type of rank_ being unsigned long or rank_type? The former is how the compiler understands the code (“semantic” reflection), the latter is what the developer wrote (“syntactic” reflection). We are collecting arguments; I know of lots of smart people with convincing arguments for each one of these options. Matus is currently writing the next revision. He will split the paper: a short one with the wording, and a discussion paper that explains the design decisions of SG7 - a sort of log, collecting the arguments for those who want to know why C++ reflection ends up the way P0194 proposes. The design paper will also contain examples of use cases, for instance a JSON serializer and likely a hash generator. Can you implement you favorite reflection use-case with P0194’s interfaces? Cheers, Axel. - Discuss on Reddit, comment here. Submitted by Mikhail (not verified) on Tue, 04/19/2016 - 19:37 Permalink Matus has a patch for clang. Matus has a patch for clang. Is it going to be applied? If yes, when? Submitted by Matus Chochlik (not verified) on Tue, 04/19/2016 - 20:54 Permalink clang patch Mikhail, short answer; this version very probably won't. It was slapped together very quickly and it has several shortcomings. If nobody else picks this up, the plan is that I'll probably start writing a new implementation from scratch during the summer. Submitted by Mikhail (not verified) on Wed, 08/17/2016 - 18:50 Permalink Any news? Hello Matus and Axel! Do you guys have any news to share about reflection? How is new implementation going, is it started? Thanks! Submitted by Axel Naumann on Thu, 11/24/2016 - 20:23 Permalink Re: Any news? Hi Mikhail, Matus has an implementation / fork of clang on github that includes some of his reflection library on top of the proposal, basically to test-drive the proposal. Within the committee, the proposal is progressing: it will likely be discussed in the library evolution group in Kona. Cheers, Axel. Submitted by Axel Naumann on Tue, 04/19/2016 - 20:41 Permalink Re: Matus has a patch for clang. Hi Mikhail, That's here I don't think he expects this to be merged. It was meant to serve as a demonstration that the proposal is feasible implementation-wise. A reality-check. I remember Matus saying that clang should be able to do a much better (i.e. efficient) job. Cheers, Axel. Submitted by Bjarne (not verified) on Wed, 04/20/2016 - 02:19 Permalink Think of typedefs seen vs no Think of typedefs seen vs no seen as a parameter/option. Think of it as the most obvious example of "lowering" Submitted by Anonymous (not verified) on Wed, 04/20/2016 - 08:30 Permalink Is anybody keen enough to Is anybody keen enough to propose when we will see reflection in the standard? Submitted by Vyacheslav Lanovets (not verified) on Wed, 04/20/2016 - 20:50 Permalink Reflection is a much needed Reflection is a much needed functionality in C++ on my personal wishlist. Other than that I can name only default operator== (already proposed AFAIK) and enforced "override" keyword (already implemented as a warning in clang). We use C++ reflection for data persistence. We currently use circa 2003 solution based on Microsoft SBR format and SBR SDK. Needless to say, it does not work in XCode or Qt/NDK. So now there is hope we can get a standard way to reflect on struct data member names/types and list struct base classes.. Submitted by Muhammad (not verified) on Thu, 04/21/2016 - 07:39 Permalink In the ArrRef example, I In the ArrRef example, I think the type of rank_ should be returned by 2 functions one say get_type() which returns the type defined by developer I.e. rank_type, the second functions say get_underlying_type() should return unsigned long as understood by the compiler. Submitted by Axel Naumann on Thu, 04/21/2016 - 12:22 Permalink Re: the ArrRef example Hi Muhammad, I think that's fairly close to what Bjarne suggests. The main point here is that both of you believe that it should be possible to identify rank_typewhereas others (in the committee) do not want reflection to be able to see a typedef. Intentionally. The argument I heard most often is that detecting a typedef will make something a distinct entity that C++ treats as identities. (My counter-argument so far is "yes, and?") Cheers, Axel. Submitted by Ralph Trickey (not verified) on Thu, 04/21/2016 - 16:28 Permalink Yes and I'd like to do that Yes and I'd like to do that for the same reason that I want to be able to distinguish between different types of Enums. The alternative in some cases is probably to start using Hungarian Notation and prefixing variables with the type again, please don't make me do that. :( If I'm accessing an external system which has a money type but I want to simply access and display that data, being able to distinguish between the money and just a double type would save needing some other way to distinguish the user type. That's a trivial case, but I'm sure there are others. I haven't used reflection in C++ since MFC., although it's heavily used in other languages. Ralph Submitted by Mitch (not verified) on Fri, 04/22/2016 - 02:56 Permalink If that's true, the committee If that's true, the committee seems to be making assumptions about how people would use reflection (a common issue w/ design-by-committee). Reflection has uses beyond semantic analyis (which would be the absolute minimum one would expect in a reflection API, but most certainly not the peak). It's just as likely people will NEED syntactic analysis. One very basic use case that comes to mind (assuming compile time reflection is constexpr - which it needs to be) would be implementing custom compile time errors (linting) of domain specific rules w/ reflection and static assertions at the syntactic level that INDEED treat some typedefs as distinct entities. Example: In some code bases typedefs are absolutely intended to be used as distinct entities (not just an alias) and will break if that typedef changes in in another configuration (as is often the intention, else why typedef?), hence static assertion on the syntax is just as important (if not more so, due to the domain specific knowledge often encoded in syntax). One could argue in such cases that typedef should be an actual type, however it's very common in C++ for people to typedef primitives (int/float/etc) and use them as if they're a distinct entity (writing code in ways that would break if the underlying type that entity ever changed, potentially without compilation failure (due to implicit casting - hence the need for linting)) Submitted by Matus Chochlik (not verified) on Fri, 04/22/2016 - 08:30 Permalink typedef vs. underlying type At the moments its `reflexpr(rank_type)` = Meta-Typedef vs. `get_aliased_t<reflexpr(rank_type)>` = Meta-Type. To me adding a separate operator for the second case looks like an overkill. Submitted by Garet Claborn (not verified) on Thu, 05/19/2016 - 08:15 Permalink typedefs and reflection When it comes to what's returned as the result for typedefs, and classes, I would imagine the same functionality being used on class deduction so inheritance should be a major player. If you have an object that would satisfy the usual diamond problem examples, some sort of structured result would have to be returned. something as simple as struct typeinfo{ typeid id; /* other properties... */ vector<typeinfo> nodes; }; If just returning a single type, it seems you'd have to always lean toward the front-most class unless compiler's context is clearly referencing a base/baser class/type. Otherwise you may have ambiguous types at the same level. Submitted by R. Hamilton (not verified) on Thu, 04/21/2016 - 13:58 Permalink type of rank_ Is there any particular reason one shouldn't be able to discover the typedef AND the underlying format, perhaps by a second query on the definition of the typedef? The distinction may make little difference now, but unless it never will, offering the option (along with future portability advice) should cover all concerns, unless the cost is inordinate. For example, which would best support a really universal yet lightweight serialization library? Submitted by Axel Naumann on Thu, 04/21/2016 - 15:28 Permalink Re: type of rank_ Hi, I personally agree. But playing the devil's advocate, "because we can" is not a good reason to offer a feature to the world. So what we really need are good use cases that motivate the need. That's what I was fishing for :-) Cheers, Axel. Submitted by Andrew Osman (not verified) on Thu, 04/21/2016 - 22:56 Permalink re:type of rank_ code generation Submitted by Garet Claborn (not verified) on Thu, 05/19/2016 - 08:20 Permalink re-querying types i like the idea of re-querying for underlying type. you could recursively get down to the primary types and it would be simple to handle multiple inheritance with 1D array return. Submitted by Paul Michalik (not verified) on Thu, 04/21/2016 - 19:39 Permalink Did anybody manage to read Did anybody manage to read the proposal cover to cover? Submitted by Nick Weihs (not verified) on Thu, 04/21/2016 - 23:49 Permalink Why not both? I think it is unquestionable that getting the backing type of a typedef is useful and probably what you'd want to see a significant amount of time when reflecting. I'd rather avoid libraries of meta functions that bake things down into whether or not a specific type resolves to an int or not, as an example. The other case (i.e. getting the forward type of the typedef), I believe is also useful, and something I wish that templates could do as well. I was recently working on a system to gather up the fields of various data structures and present corresponding UI to the user so they could edit the fields of those structures easily. One of the things I would have liked to do was give the data structure designers ways to annotate the fields to make things easier to edit on the user side. For example, say have an int and I want to be able to annotate a lower and upper bound so that the corresponding UI is a slider instead of a text input. I want to do something like this template <int lower, int upper> using r_bounded_int = int; r_bounded_int<0, 50> m_value_that_goes_from_0_to_50; It would be nice to be able to glean this information from the type instead of doing silly things like wrapping primitive types in classes or side loading the annotation through some other variadic template mechanism. Submitted by Peter (not verified) on Sat, 07/09/2016 - 03:18 Permalink I want it. I look through Matus's evolving proposals, and it's definitely moving in the right direction. The most recent is simple and powerful. My only concern is that it's now linked with concepts. You should definitely look at: Static reflection Rationale, design and evolution Submitted by Andrew (not verified) on Tue, 08/16/2016 - 23:49 Permalink Exported reflection information The proposal for runtime access to reflection information looks promising... but what I really want is the ability to have the compiler externalize (export) the reflection database so that external tools can easily consume it (from a standardized format). All sorts of code generators could benefit from this information greatly, and doing it externally allows for using superior tools than trying to build impossible-to-understand template/etc based C++ machinery to do it. This can be used to generate language bindings, serdes for data structures, etc. Submitted by Maik Guntermann (not verified) on Mon, 11/20/2017 - 16:53 Permalink Concepts=Contraints+Reflections .enable_if_t<C::existsIn10yrs_v> (for those who haven't seen yet - Herb's great presentation on cppcon17 about reflections) IMHO, it would be a fatal mistake to distinguish between reflections and concepts/constraints; i.e. introducing concepts in C++20 (like ISO/IEC TS 19217:2015) as a different "feature" and separating Herb's mentioned reflections/injection/generative C++ (which in combination will make concepts obsolete) could be a *fatal* show stopper since the complexity of modern C, which is increasing with exponential speed, would become that complex that even Scott would need at least 50 editions to make "Effective Very Modern C++" bug-free (IIRC there were only very few editions for "Effective C++", and we are currently at no. 11 or 12 for "Effective Modern C++"). Already today manager are wondering and concerned why productivity shrinks as of the decision to introduce modern C in their companies. The argument that concepts (itself; so in sense of syntactic sugar to improve usability without adding new features) will simplify modern C for the users and just make it _a_little_bit_ more complicated for library writers is a bad one, since every user is partially also a (library) writer and the other way round. To get an impression what they are about to add, please have a look at - just the possibility that there are three ways to define one and the same constraint [sorry to all Perl coders] makes me sick: //>; Next to: Introduction of 9 types of constraints (simple, type, comound required, nested required, etc.), option to partial order contraints, wildcards (in a new context), new syntax to define contraints of a return type *within* a function body with "->" (WT..?) Instead of making complex things even more complex, wouldn't it be now a good time to deprecate old things in order to make way for something new? At least trivial stuff; i.e. making the assumption that operator new never throws would simplify/enable dozens of move operations and probably trillions of brain cells programmers waste to think about how they can avoid a copy ;-)
https://root.cern.ch/comment/2351
CC-MAIN-2018-47
refinedweb
3,137
59.23
Proper way of resizing model in QTableView I have a QTableViewthat I'm populating with a subclass of QAbstractTableModel. The model is updated via a slot that receives the information and emits dataChanged. However, that doesn't seem to be enough for the view to show the new rows. I've tried also emitting layoutChangedafter dataChangedin my slot and it works, but I'm not sure if that's correct. I've read the docs and dunno whether I should use beginInsertRowsinstead. I want to know if I'm doing it correctly or there's a different way. Anyway, here's a minimal compilable example for you to try, you can just copy and paste it in a .cppand compile it. #include <QApplication> #include <QAbstractTableModel> #include <QDateTime> #include <QVBoxLayout> #include <QTimer> #include <QDebug> class MyModel : public QAbstractTableModel { public: int rowCount(const QModelIndex &parent) const { return mRecords.count(); } int columnCount(const QModelIndex &parent) const { return 2; } QVariant data(const QModelIndex &index, int role) const { if (role != Qt::DisplayRole) return QVariant(); Record r = mRecords.at(index.row()); return index.column() == 0 ? r.date : r.value; } QVariant headerData(int section, Qt::Orientation orientation, int role) const { if (orientation != Qt::Horizontal) return QVariant(); if (role != Qt::DisplayRole) return QVariant(); if (section == 0) return QString("Date"); else if (section == 1) return QString("Number"); return QVariant(); } public slots: void addValue() { Record r = { QDateTime::currentDateTime().toString(), QString::number(qrand() % 1000) }; mRecords << r; qDebug() << "Added record" << r.date << r.value; emit dataChanged(index(mRecords.count() - 1, 0), index(mRecords.count() - 1, 1)); emit layoutChanged(); } private: struct Record { QString date; QString value; }; QList<Record> mRecords; }; int main(int argc, char *argv[]) { QApplication a(argc, argv); QWidget w; QTableView * table = new QTableView(&w); MyModel model; table->setModel(&model); QVBoxLayout * layout = new QVBoxLayout(&w); layout->addWidget(table); QTimer t; QObject::connect(&t, &QTimer::timeout, [&] { model.addValue(); }); t.start(250); w.show(); return a.exec(); } @JoseTomasTocino I don't know about your code. But dataChangedsignal must only be emitted for existing rows whose content is changed. You must emit begin/endInsert/DeleteRowsinstead for row inserts/deletes to make it work correctly. I already stated in my post that I think I should use beginInsertRows / endInsertRows, I was asking for the proper way of doing it. - VRonin Qt Champions 2018 last edited by VRonin. void addValue() { Record r = { QDateTime::currentDateTime().toString(), QString::number(qrand() % 1000) }; beginInsertRows(QModelIndex(),mRecords.size(),mRecords.size()); mRecords << r; endInsertRows(); qDebug() << "Added record" << r.date << r.value; } P.S. Make your model go through the model test there's a couple of points that should get triggered by that test in your model @VRonin That's really useful, thank you very much. Also I didn't know about the "Model test", that's nice too.
https://forum.qt.io/topic/95718/proper-way-of-resizing-model-in-qtableview/5
CC-MAIN-2019-43
refinedweb
456
50.12
From: Johan Jansson (johanjan_at_[hidden]) Date: 2003-02-07 14:13:32 Hi, I've done some testing of matrix representations to decide what we're going to use for a project, and I get strange results with uBLAS. What we need is efficient memory usage, fast large sparse matrix assembly (insertion speed) and fast large sparse matrix-vector multiply. Large here means on the order n = 1e6. Let's look at just assembly for now. We start by creating an 1e5 x 1e5 sparse matrix. We then successively insert 1e5 elements in the diagonal. What happens is that the insertion speed is linear for the first ~50000 elements, and then grinds to a halt. The initial insertion speed is somewhere at 1e5 elements/s, after 50000 elements it sharply falls to 1000 elements/s (to compare, our naive implementation gets 1e6 elements / s). This is quite bizarre. With an 1e6 x 1e6 matrix, the insertion speed is smoother, but slow throughout (on the order 1000 elements / s). I've tested this on two different computers, one dual Athlon and one PII laptop, and the result is identical (aside from the absolute speed numbers). Memory is not at all full, so that can't be an issue. The code for this simple test is at the end. The compiler used was g++-2.95 (in Debian) and Boost 1.29 (also Debian). I've also observed a quadratic time complexity (in the non-zero elements) in sparse matrix-vector multiplication. I think this has been brought up before though. We've also tested MTL, and while it doesn't produce these kinds of wild irregularities, the performance is a factor 2 or 3 worse than our naive implementation, and the memory usage is a factor 1.5-2 worse. This makes me question the claim that genericity does not add overhead (in practice). 10-20% overhead is acceptable, but when we have 100-200% overhead, both in performance and memory, then it makes it impossible to justify its use. Johan -- CUT -- #include <iostream> #include <boost/numeric/ublas/matrix_sparse.hpp> #define N 100000 using namespace boost::numeric::ublas; namespace ublas = boost::numeric::ublas; typedef ublas::sparse_matrix<double> uBLASSparseMatrix; typedef ublas::vector<double> uBLASVector; int main() { cout << "Creating a " << N << " x " << N << " matrix" << endl; uBLASSparseMatrix A(N, N); cout << "Assembling" << endl; for (int i = 0; i < N; i++) { A(i,i) = 1.0; if(i % 1000 == 0) cerr << "i: " << i << endl; } } -- CUT -- Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2003/02/44012.php
CC-MAIN-2021-10
refinedweb
432
65.52
Not currently set-up I do not have the NFS solution currently set-up. I have moved on to testing a glusterfs solution. From the testing and notes I took hard/soft made no difference. I was using /etc/fstab. Finally the 3-5 second hang is the problem, even if the file system recovers, httpd can't be without its files for that long, it gets in a bad way or dies. I would guess my other applications like asterisk would also die. NFS cluster Hmmm... I'm surprised that Apache (httpd) noticed that the NFS timeout of 3 seconds and actually died. Do you have error logs? Did other applications have problems? And honestly, why are you running httpd on 40 systems, along with other applications? I'd probably re-archictect my system to have application specific systems which are tuned for that specific app. This way I get the most performance, without worrying about other apps which might take up all the memory on the system, etc. I understand what you're trying to do here, and I'm sure it can be done. But you need to provide lots more detail of your setups, including logs, application settings, error messages, etc. One thought might be to have the NFS automounter setup so that you really only have a master/slave relationship. So if the NFS master goes away, the slaves look at the slave. This does NOT handle writing back to the NFS, it's really only good for read only loads. Which might not be what you're trying to achive here. Now, for the 3 seconds failover hangs. Which HA software are you using, and how is it configured? Are you using heartbeat over Ethernet? Serial cable? How low can you set your heartbeat timeouts? How do you handle STONITH (Shoot The Other Node In The Head) to make sure it's down? Do you have power strips that you can toggle via serial? That might be the simplest way to make sure that when the failover happens, it's quick and dirty and darn well going to happen. What you really want is an Active/Active solution, so that the clients can write to a single namespace, no matter which server they are writing to. But that might involve a more expensive network to get the bandwidth and latency down. Again, you really really really need to provide more details on what exactly you've tried, what errors you've gotten and hey, it would even help to understand *why* your app fails with a 3 second hang while the cluster fails over. Does it fail if you've just got a test app which opens a directory, reads all the files in there, computes to SHA1, then loops over the directory again and again doing the same read and compute of SHA1 and comparing it to the saved version. Print out the current time to read the file and ocmpute the hash, etc, in a loop. Then failover the cluster. Which reminds me, how do you failover the cluster in your testing? Do you yank cables? Hard power off the system? I've used Netapp clusters at work and when they fail over, the clients (mostly NFS) just hang until the cluster comes back the life. Most apps/users never notice. So I'm wondering why your httpd is so sensitive. Maybe you can tweak settings on httpd to not timeout so damn quickly. Cheers, and good luck! John Maybe you can use a tool like Maybe you can use a tool like Puppet to ensure that services are running, and started if they don't. Apache is a standard example for Puppet. Maybe the inotify (see) framework can help you monitor and fix such issues too. See my "red herring post" See my "red herring post" Working on a blog post Chad, I'm typing up my notes in a blog post and hopefully will have it online later tonight. It's actually an interesting exercise to work through. Great I look forward to it. Deep breath and get fine Deep breath and get fine reading here - this is hot discussion about iscsi and drbd:... Hot for geeks, if a question arise ;-) Great Link I have not read the whole thread yet, but it seems to be right on point with our discussion here. It seems everyone agrees the issue is how to fail over gracefully. ...still those posts don't ...still those posts don't give stright recipe to nirvana :-) For single client, it is tempting put drbd over iscsi. For many nodes, there are still several gotchas.? It is a hardware/cost issue I have about 40 nodes each with 1 80gb hd, in a 2.5" form factor. To change each of them over to hold 2tb of data is very expensive, and since each is only capable of holding 2 hds each, it is also limiting. What happens when the data grows to 5tb, 10tb, etc. My client with the really fast local raid arrays has a similar set-up to yours. One I designed in fact. The main difference is that I set up a development server for them that all their personal works on. When a change is committed they run a "push" application that rsyncs the files to the production machines, and nightly we run an "global rsync" to make sure nothing has drifted out of sync. This prevents the need to rsync every 10 min and the overhead that that requires (which as systems scale up becomes a problem).? Easy with DRBD and Pacemaker If you're interested, shoot me an email. I can set it up for you, and give you easy maintenance instructions. I looked at this combo I know that DRBD works, and I am petty sure pacemaker is just heartbeat under another name. Can you tell us about your solution? I do want this set-up, but I also want to share the solution with the Linux community. Can you post a short paraphrased "how to" so we can understand what you are proposing? Greetings, Greetings, Does this serve the purpose? Also how about mounting the nfs with udp rather than tcp? Regards Rajagopal Thanks for the resource Rajagopal, Thanks for the link I intend to read it carefully. I looked at NFS over UDP, but everything I read said to stay away from it. Do you know if something has changed? I will see if I can work with it after reading the pdf. So you want a ha file system So you want a ha file system on several systems. you have 2 storage servers which will contain the disks in a ha way. you then want to export some sort of bock device or file system out to the other servers. I have not played with drbd but i am assuming that this replicates a block device between the two file servers and it will give you a virtual block device, it appears that you can then combine this with lvm to split out the volumes. Next you need to export this to the servers so you will need to setup an iscsi target. this can use any block device so setup drbd and lvm and export this. setting this up is a bit of a pain using debian as you need to compile the kernel driver but there are guides. On the client side you will be mounting this block device on all nodes in that cluster and using a clustered file system ti make it work. I work as an oracle DBA and use ocfs2 though gfs might also do the job.. setup ocfs between the nodes this involves entering the ip addresses of the other nodes and configuring it to load the services on startup. you still need something application level to to be cluster aware, reads will be fine but you could potentially have some file level corruption if the two copies of the application write to the same file. an oracle RAC database will run fine in active active but for postgres or mysql you want only one node at a time. feel free to contact me i have setup a single iscsi target to multiple servers but not plated around with having active active on the storage level. OCFS network locking? Since this file system will be used for may applications it is not really practical to leave the write lock in the application space. We need it at the file system level. I thought OCFS was a true network file system and did allow for network locks, and everything you suggest is reasonable, as a matter of fact I have tried it almost exactly as you have listed (with the exception of lvm, which reduces performance and the features it provides are not needed for my implementation). The real question is how do we handle a server failure? How do we take it out of service? How do we re-sync it on recovery? How do we keep clients from hanging when it is down? Stateless? Is NFS stateless? If so, you ought to be able to have the backup server mirror the primary (including who's mounting its FSs if this is applicable). When the primary server crashes, have the secondary server assume the primary's MAC and IP addresses and become the primary. When the failed server restarts, it re-syncs and fetches the secondary MAC and IP addresses from the primary server. If the secondary server crashes, the primary should assume the secondary's MAC and IP addresses. When the secondary returns, it re-syncs and gets the secondary addresses from the primary, as expected. No it is not NFS is not really stateless, but that is not a problem as you can make the files needed part of the replicated data between the 2 servers. Your solution is a fine one, and is one of the things I tried (see "Highly Available NFS Server"), the problem is all the magic of switching the IP fast enough and seamlessly enough not to hang the clients. Not stateless In my HA/DRBD/NFS server cluster I store /var/lib/nfs/ on the DRBD disk so that the NFS state data is available to either cluster node when it is active. I use a drbdlinks heartbeat resource to make /var/lib/nfs a symlink to the /var/lib/nfs/ directory on the DRBD disk(/fserv/var/lib/nfs/ in my case ). This does help with recovery after a failover. It is still not perfect though. I wish I had something to suggest... ...but I have read every comment with great interest as this subject has had me scribbling "wouldn't it be great if..." diagrams many a time. Ultimately doesn't it all come down to whatever piece of kit your server is connected to right now to provide the file service must be replaced almost instantaneously with it's backup if it disappears for whatever reason. Even as a mental exercise that's a difficult one to achieve. Whatever mechanism the two devices use to know the other is still alive (or not) has to have a latency of half that you require for the service they provide. My main reason for commenting is so I can see the follow ups. I look forward to a solution being pieced together and published in full in an LJ later this year! -- Smotsie Dad.husband.linux-loving-geek.radio-presenter.eco-geek Thanks Well thank you for the support. I am hoping someone out there, much smarter than I am, will solve this for all of us. I'm not 100% sure this is I'm not 100% sure this is what you're looking for, but have you seen the Nasuni Filer? It uses cloud storage (Amazon's S3 is currently the cheapest provider), but has intelligent caching (for local-fileserver speeds) and uses encryption to secure your data. Lustre Update After reading their docs very carefully I have already decided that Lustre is not really an option as it uses load balancing for fail over. This seems to be very common, network file systems are designed and created by someone, but the authors don't seem to think that the fail over/load balancing is part of the file system so they leave that to a 3rd party solution/application. This is fine except that I have not found a load balancing solution that is not designed for file systems (with near instantaneous fail over). Anyone have a really fast load balancer solution? I like your DRBD option I'm curious as to how you mounted the filesystem on the client when using DRBD. I'd probably have tried using the automounter to mount the filesystem, as that tends to recover more gracefully than having something hard-coded in /etc/fstab. I think you're on the right track though. If you still have the DRBD setup, try using autofs to mount the filesystem on the client and see how that works. Bill Childers is the Virtual Editor for Linux Journal. No one really knows what that means. Re: I like your DRBD option I used DRBD just to replicate the data partition on the server, so that on fail over, the 2nd server would go active and have an exact copy of the primary server in near real time. This worked fine. The problem is the delay in the load balancer switching to the secondary server. The load balancer needs 3+ seconds to make the switch, and this delay makes all the NFS clients hang. I will have to read/research the autofs mount option and see if that some how overcomes the short comings of the NFS hang. Have you tried this yourself with a standard partition? Basically just mount an NFS partition and then block the client with iptables on the server, then see if the NFS mount hangs on the client. When it does (which it will) release the iptables block and see if the NFS mount recovers. If it does then we will be 1 step closer to making the DRBD + NFS solution work. I've rebooted NFS servers I've rebooted NFS servers where the clients are using autofs to mount, and it seems to recover relatively gracefully. Whether it can do it within the time period you mention is another story. It's an interesting problem, and one I'd love to test in the lab, but my dance card's full for a while. :( Bill Childers is the Virtual Editor for Linux Journal. No one really knows what that means. My to do I will add it to my to do as well and see if I can make time to test it. Any other readers out there have a few hours to kill? autofs I just want to say thanks for posting such a great topic! I'm really surprised that you've not tried autofs. This should definitely solve the hanging problem on the clients. According to RedHat (I know you're not using them) this is the way NFS should be used exclusively. I've not used NFS without autofs since RHEL (or CentOS) 4.0. Also, just to put things in perspective, a 3 second delay is NOT a lot. With EMC CLARiiON SAN arrays, which are definitely not within your budget, the time it takes to trespass (fail-over) LUNs from SPA to SPB (the two service processors in the array) is normally in the 30-second range, PowerPath (EMC's multipathing software) has a default timeout of 60 seconds. I'm not saying that you need to have resiliency on the application layer, but you can definitely have resiliency on the client's OS layer (below the application) and autofs definitely gives you that. Thanks. A couple of options Cleversafe () might be an option, though I'm not sure how well it scales to the low end. Basically it distributes parts of a disk block to multiple storage nodes. It codes the block parts so that you only need a subset of all the nodes alive to reconstruct the block. It then has an iscsi interface so you can use multipathing to provide redundancy to the cluster itself. Sean Interesting but improperly sized From what I have read this seems like a reasonable approach. However it seems to need 16 servers (based on their examples) and I only have 2 servers. If I was deploying something much larger I would definitely try this out, but as it stands it is just to big for my needs and resources. I can't even test it to see if I can find any holes in it. I found that several solutions seemed to work on paper then had issues in the real world. When I played with it I was When I played with it I was using 2 servers, which basically gets you a RAID-1 configuration. It's all done in the setup of the volumes. Sounds a lot like glusterfs This sounds a lot like glusterfs, which I have set-up already. Glusters problem is that when a server goes down it takes about 5 seconds (one time) for the server to time out, it then gets marked by clients as down and the delay is gone. The problem is that the 5 second delay is long enough to kill httpd. I might try Cleversafe and see how long it takes them to recover from a downed server. What's your primary What's your primary application for this? Trying to build a redundant file system is hard, as you've shown. Often it's easier to build redundancy closer to the application level. Mogilefs might be a better approach if you're looking to serve a huge number of files over HTTP and want availability... FWIW, Cleversafe has a file based store that you can access through HTTP instead of iSCSI. It's mostly made to be an origin server for CDN because they're going the "lots of slow cheap disk" philosophy for their commercial offering. Mogilefs does not work Mogilefs requires that every node has enough HD for all the files. Since my servers have 1.5 tb and my nodes only have 80 gb, this is not going to work for me. Although it may work for others. ? Mogile balances out the ? Mogile balances out the files across the storage nodes based on your duplication policy. But it's application level, so it won't meet your criteria of a general purpose file system. A general file system I would use this as a general file system. That includes using it for http files, config files, asterisk recordings, e-mail, ftp, and more. (Note I do not include log files, as I feel you should log locally and merge nightly.) The idea for me is to be able to create a farm of servers that all share the same storage back end. When we (the linux community) talks about clusters, mostly we talk about computer clusters. However I have a more general cluster solution in mind. I don't want/need many computers to solve a single problem, I have many users that want access to many services. Most applications that my users want is more or less stateless (think http and e-mail), in that there is nothing special about a single server in my cluster except for the data. If I have shared storage I can throw almost an unlimited number of servers at the cluster and handle as much traffic as my bandwidth will allow. Network latency.. You've really opened a good subject and I'll be watching this thread pretty closely too... I've been wanting to implement a similar solution and have had the same results... As for network latency... to my important clients I've gone as far as binding multiple nics in an effort to decrease access times... It works, but it's not a real good solution for multiple clients, unless of course you have the budget. Jerry ---- Jerry McBride Latency solutions Jerry, For the most part I can live with the latency issues, I do not need super fast data. I do have some clients that run local SAS raid arrays on every server just to overcome the data speed issues, so I understand the need for fast data. I like to say when it comes to storage, you can either have big, slow, and cheap or small, fast, and cheap, but if you want big and fast, it won't be cheap. I think the same applies here. If you want fast, it will cost you to go to FC on all the servers. Even with bonded NICs (which I do run on some critical servers) There is just a limit of what you can do over copper. Chad, I think you've made Chad, I think you've made some important comments here. All in all, * Storage solutions are not a "one size fits all". It sounds like you have solved the storage problem for 95% of your infrastructure as for the remaining 5% , the question needs to be, Why do you need a speedy recovery? Is it for file-locking of a particular application? Is it to keep users happy? Is it a particular application that can't handle a 3 second timeout? Perhaps the remaining bit is best handled by throwing a different solution at it. this will cost less money and end up helping a lot in the long run because the solution is tailored for the problem you are trying to solve. if it's for users on the web who need dynamic content, throw something static at them for 3 seconds and then make it dynamic when the hiccup is over. if it's for a relational DB, you will probably have to get something fast like FC, no other way to do it. if it's file-locking, you already know how to use file-systems that can handle file contention, just fine-tune them if it's an application, talk to developers and ask them why there needs to be less than 3 seconds between file reads/writes, etc. I hope all this helps It may come to that I agree, that I have a solution that works as long as nothing is down, and you are right the systems are up 95+ % of the time. It may be that it can only be done with special hardware (FC) and/or software (proprietary), but I hope that is not the case.. It also means the load will spike as requests get queued up on the load balancer and may overwhelm the nodes as the flood gates are opened. All of this and we are only taking about 1 out of 10-12 applications that will depend on the file system. It would not be reasonable/acceptable for the network file system solution to make a simple outage into a complex one. Which the httpd issue could be. I don't necessarily agree that focusing on the application side would cost less money. A commercial solution to the problem is a one time cost, and trying to get 10-12 sets of application developers to reprogram their code could take many man hours of work (read paying salary) or may not be possible at all (read busy open source developers that don't have time to modify their application). I may be at the end of the road on this, and it could be that there is not an inexpensive solution to the problem, but I am not ready to cry uncle yet ;) I am not ready to cry uncle yet Shad, I don't know a s**t about this stuff but I've read all the posts here and your very last sentence convinced me to write a note to show my support. Animo amigo.
http://www.linuxjournal.com/content/storage-cluster-challenge-lj-staff-and-readers?page=1&quicktabs_1=0
CC-MAIN-2017-13
refinedweb
4,027
70.43
I am trying to search for a word out of these 2 Principles. But by code displays both of the principles not just one. How could I make it display only the principle where the word is located. import re patterns = [ 'work' ] text = """\ 1. Principle of Segmentation: a) Divide an object into independent parts - Replace a large truck by a truck and trailer. - Use a work breakdown structure for a large project. - Sequential novel turn into movies b) Make an object easy to disassemble. - Bicycle disassembling (saddle, wheels) - Furniture (sofas, table) - Crib c) Increase the degree of fragmentation or segmentation. - Replace solid shades with Venetian blinds - Computer covers with holes for ventilation - Multi blade cartridge Razor 4. Principle of Asymmetry: a. If an object is symmetrical, change its shape to irregular. -Asymmetric paddles mix -cement -truck -blender -cake mixer """ for pattern in patterns: print 'Looking for "%s" in "%s" ->' % (pattern, text), if re.search(pattern, text): print 'found a match!' else: print 'no match'
https://www.daniweb.com/programming/software-development/threads/380981/how-to-improve-search
CC-MAIN-2017-09
refinedweb
164
59.7
Welcome to the Ars OpenForum. For me I look to motivation and how the failure came about. If I determine that it wasn't egregious and their intent was to create a good game but they came up short, but then have since rectified it or made large improvements I could see myself returning to a game.But if, in my heart of hearts I only see greed and laziness and outright lies, I usually will not or do not return (or even begin).When that guy came out and said the things he said that were plainly not true and would have been obvious falsehoods given what he knew at that point, I can't in good conscience see myself playing this game. I think the big difference between the games that did eventually get to good state is that they haven't launched intentionally as MVP, though they did launch with issues. Take Rainbow 6: Siege or The Division 1, they've launched with serious multiplayer and hacking issues. However Ubisoft quickly iterated on patches. For TD1 you had update every week with meaningful fixes. Eventually it got to a very good state. Compare it to Fallout 76 and Anthem, which not only launched with bugs, but also lacking meaningful content. And the companies responsible are slow to fix anything at all. Also, I think the player attitude has evolved. As there are more games then ever to choose from, gamers have less patience. BF4 launched with a lot of issues, mainly related to server stability and netcode. They got to fixing it, eventually got to the point where BF4 netcode was one of the better (if not the best) in the industry. BF4 went on to be a very popular game. Fast forward few years, BFV launched with issues, no one stuck around, and BFV is now completely abandoned, EA had to reduce number of game modes to make matchmaking viable. Actually, Todd, it's exactly what I expect from Bethesda.Which is why I kept my money in my wallet until I saw how it turned out. In the end, I got a free beta key and get to see how glad I was I didn't pay for it firsthand. Fallout 76's biggest crime is showing us a world that would have made an outstanding $39.99 DLC add-on for F4 if they'd just not messed with what works. We've have spent the past 8 months complaining about same-old, same-old game play mechanics....but we'd have still been playing. I'm baffled by the inclusion of Final Fantasy XIV ("FFXIV") in this line: I think some game concepts are too good to let fail so gamers rally around them even though the launches are epic disasters. Whether or not it survives depends on the publishers willingness to keep banging on it until the issues are sorted. There is a small window where the patience of the gamers and the enthusiasm the devs show for fixing it must meet. Otherwise it's a write-off. Sometimes a community will form around the game that attempts ot fix its shortcomings when the developer abandons it but that is kinda rare and takes some special talent to be present. You know where this is all going right? This release now and patch later?Game streaming services and Netflix-esque game passes like Xbox's Game Pass. Gamers are going to want to be able to try out a game and then come back when the game is actually "good." The problem is this will foster even more mediocre games because now they're just sucking off the licensing teat of whatever streaming or rental service is out there. To me, Fallout 76 was a case of "its almost there". I think that Not-Massively-But-Still-Multiplayer model was ultimately an inspried choice. But the choice that there should only be one single living interactable NPC that I've seen (A wandering meat vendor) ultimately sunk it.I get that rather than being Post-Apoc, its Post-apoc-nova-apoc (the apocalypse you have after the apocalypse), but I think a very small number of NPCs in Hiding could have made all the difference in the feel of the game.The other stuff - Inventory, balancing, et al, that stuff I truly believe is forgivable as we expect that stuff to be fixed in patches. I pre-ordered one game in 2019. Devil May Cry 5... 2 days before it's release so I can pre-load and have it ready to play. I don't regret my decision, DMC5 is a phenomenal game. I loved every moment of it. Among many people I knew who hyped No Man's Sky in 2016, I was skeptical. Mainly because I knew how a 64bit generator worked, and how Sean Murray got his 18 quintillion planets. (( literally the upper limit of a 64 bit number )). When it was released, I was able to say "I told you so.", but deep down, I was super disapointed. I wanted a 3D Starbound (great game btw), and hoped that No Man's Sky was it.Fast forwards now, I bought it during the Steam Winter Sale 2018, and logged 80 hours in the game. I absolutely love the game now. I'm always happy to see game devs sticking with a failed product and making it great. Yeah, the game was launched in a bad state, but if a bad game gets good, I reward the time and energy in making that game good, not in releasing a bad game. I won't touch Anthem, I don't think that'll ever be a great game because of how EA's handling that game... Fallout 76 is absolute garbage and will always be garbage. It was deeply terrible on a conceptual level. The developers simply didn't understand what people liked about previous Fallout games. You can polish a turd, but at the end of the day you just have a shiny turd.And on a related note, developers have gotten so lazy. Prior to the 360/PS3 gen when internet access was still sporadic or somewhat sporadic, they released polished games that worked perfectly at launch out of the box. They've become way too reliant on patches and DLC to fix broken and unfinished games. I'd like to see more games that are finished, polished, and pretty much perfect on day 1. A perfect example of the old way of making games is Halo CE. It's one of the most polished and balanced shooters ever made...with no patches (because live hadn't even launched when it came out). You just put in the disc and it works pretty much perfectly. If you are having to rely on gigantic patches that come out immediately after a game launches you have failed as a developer. LOL>>"any time you're going to do something new like that you know you're going to have your bumps."New... what new? It's the same crap but with multiplayer. Alright, if that is the way of the industry then no more preorders, no more release day purchases and wait for the first major update before buying. Got it, thanks. My entire feeling in FO76 is that it's just like a minor side title that lacks content, has a shitty multiplayer implemtnation and oh yeah suffers from everything that somehow became MORE broken in FO4.Oh and somehow manages to look worse than FONV (I realize from some technical angles it's better, but it's also possible for better to be worse, which is what happened here and with FO4.)
https://arstechnica.com/civis/viewtopic.php?p=37468799
CC-MAIN-2019-39
refinedweb
1,295
71.75
Abstract: Final fields could not be set with reflection in Java 1.1. In Java 1.2 they changed it so that we could set final fields. We again couldn't in Java 1.3 and 1.4. Since Java 5, we can again set final fields. Welcome to the 96th edition of The Java(tm) Specialists' Newsletter. We are slowly but surely approaching the 100th newsletter edition, which I am hoping to send on the 30th of November, our 4th anniversary of The Java(tm) Specialists' Newsletter. In my last newsletter, I invited you to say "hello" with yahoo messenger. I had some interesting conversations since then with readers from around the world. I decided to rather not publish my yahoo ID on my website, imagine if 450'000 TheServerSide readers were to suddenly decide that they all wanted to be my friend ;-) So, my yahoo ID is special knowledge, reserved for my subscribers :) The result of the survey was interesting. Of the 80+ emails that I received, about three quarters felt that generics would make Java code more maintainable. Due to Java 5 having just been released, none of them had started using generics in production code. Most of my customers are using JDK 1.4.2, some are on 1.4.1 and others still on 1.3.1. It will be at least another six months before some of them will start moving to Java 5. javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge. Narve Saetre from Machina Networks in Norway sent me a note yesterday, mentioning that it was a pity that we could change the handle to a final array. I misunderstood him, and started patiently explaining that we could not make an array constant, and that there was no way of protecting the contents of an array. "No", said he, "we can change a final handle using reflection." I tried Narve's sample code, and unbelievably, Java 5 allowed me to modify a final handle, even a handle to a primitive field! I knew that it used to be allowed at some point, but that it was then disallowed, so I ran some tests with older versions of Java. First, we need a class with final fields: public class Person { private final String name; private final int age; private final int iq = 110; private final Object country = "South Africa"; public Person(String name, int age) { this.name = name; this.age = age; } public String toString() { return name + ", " + age + " of IQ=" + iq + " from " + country; } } In JDK 1.1.x, we were not able to access private fields using reflection. We could, however, create another Person with public fields, then compile our class against that, and swap the Person classes. There was no access checking at runtime if we were running against a different class to the one that we compiled against. However, we could not rebind final fields at runtime using either class swapping or reflection. The JDK 1.1.8 JavaDocs for java.lang.reflect.Field had the following to say: final, the method throws an IllegalAccessException. In JDK 1.2.x, this changed a bit. We could now make private fields accessible with the setAccessible(true) method. Access of fields was now checked at runtime, so we could not use the class swapping trick to access private fields. However, we could now suddenly rebind final fields! Look at this code: import java.lang.reflect.Field; public class FinalFieldChange { private static void change(Person p, String name, Object value) throws NoSuchFieldException, IllegalAccessException { Field firstNameField = Person.class.getDeclaredField(name); firstNameField.setAccessible(true); firstNameField.set(p, value); } public static void main(String[] args) throws Exception { Person heinz = new Person("Heinz Kabutz", 32); change(heinz, "name", "Ng Keng Yap"); change(heinz, "age", new Integer(27)); change(heinz, "iq", new Integer(150)); change(heinz, "country", "Malaysia"); System.out.println(heinz); } } When I ran this in JDK 1.2.2_014, I got the following result: Ng Keng Yap, 27 of IQ=110 from Malaysia Note, no exceptions, no complaints, and an incorrect IQ result. It seems that if we set a final field of a primitive at declaration time, the value is inlined, if the type is primitive or a String. In JDK 1.3.x, Sun tightened up the access a bit, and prevented us from modifying a final field with reflection. This was also the case with JDK 1.4.x. If we tried running the FinalFieldChange class to rebind the final fields at runtime using reflection, we would get: java version "1.3.1_12": Exception thread "main" IllegalAccessException: field is final at java.lang.reflect.Field.set(Native Method) at FinalFieldChange.change(FinalFieldChange.java:8) at FinalFieldChange.main(FinalFieldChange.java:12) java version "1.4.2_05" Exception thread "main" IllegalAccessException: Field is final at java.lang.reflect.Field.set(Field.java:519) at FinalFieldChange.change(FinalFieldChange.java:8) at FinalFieldChange.main(FinalFieldChange.java:12) Now we get to JDK 5.x. The FinalFieldChange class has the same output as in JDK 1.2.x: Ng Keng Yap, 27 of IQ=110 from Malaysia When Narve Saetre mailed me that he managed to change a final field in JDK 5 using reflection, I was hoping that a bug had crept into the JDK. However, we both felt that to be unlikely, especially such a fundamental bug. After some searching, I found the JSR-133: Java Memory Model and Thread Specification. Most of the specification is hard reading, and reminds me of my university days (I used to write like that ;-) However, JSR-133 is so important that it should be required reading for all Java programmers. (Good luck) Start with chapter 9 Final Field Semantics, on page 25. Specifically, read section 9.1.1 Post-Construction Modification of Final Fields.. It even says this: If a final field is initialized to a compile-time constant in the field declaration, changes to the final field may not be observed, since uses of that final field are replaced at compile time with the compile-time constant. This explains why our iq field stays the same, but country changes. Strangely, JDK 5 differs slightly from JDK 1.2.x, in that you cannot modify a static final field. import java.lang.reflect.Field; public class FinalStaticFieldChange { /** Static fields of type String or primitive would get inlined */ private static final String stringValue = "original value"; private static final Object objValue = stringValue; private static void changeStaticField(String name) throws NoSuchFieldException, IllegalAccessException { Field statFinField = FinalStaticFieldChange.class.getDeclaredField(name); statFinField.setAccessible(true); statFinField.set(null, "new Value"); } public static void main(String[] args) throws Exception { changeStaticField("stringValue"); changeStaticField("objValue"); System.out.println("stringValue = " + stringValue); System.out.println("objValue = " + objValue); System.out.println(); } } When we run this with JDK 1.2.x and JDK 5.x, we get the following output: java version "1.2.2_014": stringValue = original value objValue = new Value java version "1.5.0" Exception thread "main" IllegalAccessException: Field is final at java.lang.reflect.Field.set(Field.java:656) at FinalStaticFieldChange.changeStaticField(12) at FinalStaticFieldChange.main(16) So, JDK 5 is like JDK 1.2.x, just different? Do you know when JDK 1.3.0 was released? I struggled to find out, so I downloaded and installed it. The readme.txt file has the date 2000/06/02 13:10. So, it is more than 4 years old (goodness me, it feels like yesterday). JDK 1.3.0 was released several months before I started writing The Java(tm) Specialists' Newsletter! I think it would be safe to say that very few Java developers can remember the details of pre-JDK1.3.0. Ahh, nostalgia isn't what it used to be! Do you remember running Java for the first time and getting this error: "Unable to initialize threads: cannot find class java/lang/Thread"? I personally think it is risky to change something so fundamental to the language. I will read through that specification in detail to see what else has changed ... Gotta get some beauty sleep now - tomorrow I need to work on an access control system in Java for a client....
https://www.javaspecialists.eu/archive/Issue096-Java-5---final-Is-Not-Final-Anymore.html
CC-MAIN-2020-45
refinedweb
1,365
58.38
hello I have fez panda II but when I test on deployment tool I take this message what should I do please help me :’( I didnt use for a long time :’( Device has old or unsupported configuration error :'( :'( but when I try to ping, I take “Pinging… TinyCLR” message when I try on visual c# I take "Error 1 An error has occurred: please check your hardware. Mfdeploy can only be used to ping panda and to get version number. Nothing else is supported. Other fez support everything. what will I do to use my fez panda undate your firmware. That is the BEFORE YOU SUBMIT pointer #1 that Gus mentioned earlier. When you reply to this post, look for the link. Click it. Follow the instructions. Then try again. I updated my firmware to 4.1.8 it was no problem when I send ping there is no problem when I check my version on deploy tool it is correct 4.1.8 when open in visual c#, I took this error message Error 1 0xfd000000 C:\Users\SONY VAIO\AppData\Local\Temporary Projects\FEZ Panda II Application1\MMP FEZ Panda II Application1 what does it mean what will I do to use my fez panda II Have you tried these exact steps yes when I try first project there is no problem but when I start new project->fez panda II project, and debug blinking led I take this error Then you still have unmatching firmware. Do not make any changes in new created projects and … - load console application, does it work? This should not fail - load panda II blink LED, does it fail? If fails, you do not have the correct firmware. yes console application is working correctly but when I apply blinking led error I get is ------ Deploy started: Project: MFConsoleApplication1, Configuration: Debug Any CPU ------ An error has occurred: please check your hardware. Object reference not set to an instance of an object. Source: Microsoft.SPOT.Debugger.CorDebug Stack : at Microsoft.SPOT.Debugger.VsProjectFlavorCfg.Deploy() in c:\depot\current\CLIENT_V4_1\Framework\CorDebug\VsProjectFlavorCfg.cs:line 893 at Microsoft.SPOT.Debugger.VsProjectFlavorCfg.<Microsoft.VisualStudio.Shell.Interop.IVsDeployableProjectCfg.StartDeploy>b__0() in c:\depot\current\CLIENT_V4_1\Framework\CorDebug\VsProjectFlavorCfg.cs:line 634 ========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ========== ========== Deploy: 0 succeeded, 1 failed, 0 skipped ========== my firmware is ClrInfo.clrVersion: 4.1.2821.0 ClrInfo.clrVendorInfo: Microsoft Copyright © Microsoft Corporation. All rig ClrInfo.targetFrameworkVersion: 4.1.2821.0 SolutionReleaseInfo.solutionVersion: 4.1.8.0 SolutionReleaseInfo.solutionVendorInfo: GHI Electronics, LLC SoftwareVersion.BuildDate: Dec 22 2011 SoftwareVersion.CompilerVersion: 410561 release note: Important note: Requires .NET Micro Framework 4.1 and Visual Studio 2010 • USBizi (FEZ Mini, FEZ Domino, FEZ Rhino, FEZ Panda) V 4.1.8.0 • EMX (FEZ Cobra) V 4.1.8.0, TinyBooter V 4.1.6.0 • ChipworkX V 4.1.8.0 TinyBooter V 4.1.5.1 • GHI NETMF Library V 4.1.8.0 please help me I have school project I need this device I do not quite understand the first statement. If I read it correctly, you have a project that you used to use with your Panda a while back. You updated the SDK and the firmware on the Panda, the project builds ok but when you deploy it you get the error. If that is the case, then your need to go the the references list in the project. Make a note of the references you have selected. Delete all but the “mscorlib” and re-add them. It is possible that the references are pointing to “old” SDK assemblies. If I am way off base on this I appologize, its just that I recently reserected an old project, replaced all the references that had warnings on them (trying to build with the warning tags on the references gives you a build error). When I tried to deploy, I got the same error you have. Once I replaced all the references, my project worked. Who knows this may work for you, it can’t hurt. thanks for answer… when I had 4.1.5 firmware I took this error cause of this I updated firmware but nothing changed new event I tried without fez_pandaII_GHIelectronics reference I didnt take this error but code didnt work code is this using System; using Microsoft.SPOT; using System.Threading; using Microsoft.SPOT.Hardware; namespace MFConsoleApplication1 { public class Program { public static void Main() { OutputPort LED; LED = new OutputPort((Cpu.Pin)4, true); Thread.Sleep(Timeout.Infinite); } } } there is no change pin4 have I warranty about this device? You still need to have the references. The failure to work may be due to the missing references. My point was that you needed to replace them.
https://forums.ghielectronics.com/t/device-has-old-or-unsupported-configuration-error/3025
CC-MAIN-2019-22
refinedweb
793
67.04
10 February 2010 17:54 [Source: ICIS news] WASHINGTON (ICIS news)--The US Department of Energy (DOE) said on Wednesday it expects US prices for oil and natural gas to hold mostly steady through 2010, with a slight gain for crude and larger increases for natural gas prices in 2011. The department’s Energy Information Administration (EIA) said in its monthly short-term energy outlook (STEO) that the price of ?xml:namespace> That estimate is just a dollar higher than the administration’s 2010 crude price forecast of a month ago. The administration said it expects WTI prices to average about $84/bbl next year, the same estimate it issued in its January outlook. “The world oil market should gradually tighten in 2010 and 2011 as the global economic recovery continues and world oil demand begins to grow again,” the administration said. However, the outlook report said, “the combination of high commercial inventories among members of the Organization for Economic Co-operation and Development (OECD) and ample OPEC surplus production capacity should help dampen the likelihood of any large upward surge in prices”. The OECD is made up of the major industrialised countries. In natural gas, the administration said it expects the Henry Hub spot price to average $5.37/m Btu in 2010. Although that forecast is essentially even with last month’s EIA outlook for a 2010 average of $5.36/m Btu, the price forecast is still $1.42 above the 2009 average price of $3.95/m Btu. The price and availability of natural gas are of crucial importance to the The EIA said that an increase in coal-fired electric power generation in the “The projected 1.3% decline in electric power sector natural gas use is offset by growth in the residential, commercial and industrial sectors in the 2010 forecast,” the outlook said. “The outlook for growth in total natural gas consumption in 2011 comes from increases in the industrial sector as a result of improved economic
http://www.icis.com/Articles/2010/02/10/9333156/us-sees-steady-oil-and-gas-prices-through-2010-gains-in-2011.html
CC-MAIN-2014-10
refinedweb
333
50.16
class boy: """Details""" def __init___(self): print("Details") def name(self,x,y,z): a=x.title() + ' ' b=y.title() + ' ' c=z.title() nam = a + b + c print(nam) def age(self,nu): f = int(num) if f>18: return 1 else: print("\nSorry you must be at least 18 years to continue") return -1 def p_details(self,cc): #need to print all detals by calling this method. print("Name :" + self.nam ) print("Age :" + self.f) print("Country :"+ con) a = raw_input("Enter the first name :") b = raw_input("Enter the middle name :") c = raw_input("Enter the last name :") num = raw_input("Enter the age :") inp=boy() if(inp.age(num) == 1 ): con = raw_input("\nPlease enter your country :") inp.p_details(con) class boy(): def __init___(self, num): self.age = int(num) num = raw_input("Enter the age :") inp = boy(num) if inp.age < 18: .... Your class should work like this. The way you wrote it, inp has no age attribute to evaluate. But as pointed out in the comments, there's really a lot wrong with your code there. The way you're using methods suggests you aren't clear on how classes and methods work, exactly. Also, getting user input opens up a whole host of issues - like what happens if the user doesn't enter a numeral when asked, and instead types, say, "old enough"?
https://codedump.io/share/5DpGOWnMWhoE/1/how-can-two-methods-in-same-class-can-access-it-members-in-python
CC-MAIN-2017-04
refinedweb
223
73.37
Hello, readers. Welcome to codegigs. If this is your first time on the site, I would suggest you to bookmark us. We provide detailed, explanatory articles for data science. In this article, I’ll try to cover all the base knowledge you’ll need for getting started with OpenCV, and then we’ll do a project along the way. So let’s get started -> Google colaboratory setup First, we’ll be opening up Google colab, our trusted python notebook. Go to and sign in with your Google account. Next, we’ll download a video from youtube for use. Downloading videos from youtube We’ll be installing a handy tool that I use a lot daily – YoutubeDL. The code for installing it is given below: !sudo pip install --upgrade youtube_dl !youtube-dl "" This will download a funny cat video to your local colab environment! Now let’s briefly go over the basics of OpenCV. Basic knowhow of OpenCV OpenCV stands for Open Source Computer Vision Library. It came out around 2000 and has seen a significant amount of support in the community. (Even though we’ll be using python in this article, know that the same can be performed using any other programming language, so take your pick.) OpenCV-Python is a library of Python bindings designed to solve computer vision problems. Numpy, a highly efficient library for numerical operations with a MATLAB-style syntax, is used by OpenCV-Python. Install OpenCV directly using pip : pip3 install OpenCV-python Now let’s work with an image first. You can use any. I used the 1. Simple matrix operations on images The scaling of images is referred to as image resizing. Scaling is useful in a variety of image processing and machine learning applications. It aids in the reduction of the number of pixels in an image, which has various advantages, for example. It can reduce the time it takes to train a neural network since the more pixels in an image there are, the more input nodes there are, which raises the complexity. Get the picture: !wget -O "pic.jpg" "" import cv2 image = cv2.imread("pic.jpg") plt.imshow(image) plt.show() Gaussian = cv2.GaussianBlur(image,(5,5),2) plt.imshow(Gaussian) Median filtering is commonly employed in digital image processing because it preserves edges while reducing noise under specific conditions. It’s one of the most effective algorithms for eliminating salt and pepper noise. median = cv2.medianBlur(image,1) plt.imshow(median) consider this image which is very noisy: Now if we use median blur on it: !wget -O "balloons.jpg" "" image2 = cv2.imread("balloons.jpg") median = cv2.medianBlur(image2,5) plt.imshow(median) We can see that the image has been completely denoised! To convert this image into grayscale, gray_image = cv2.cvtColor(median, cv2.COLOR_BGR2GRAY) plt.imshow(gray_image) Let’s download another picture : !wget -O "pic.jpg" "" image=cv2.imread('pic.jpg') image=cv2.cvtColor(image, cv2.COLOR_BGR2RGB) plt.imshow(image) We know to grayscale the image: gray_image=cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) plt.imshow(gray_image) There are a few other modes, like hsv (hue, saturation, value): hsv_image=cv2.cvtColor(image,cv2.COLOR_BGR2HSV) plt.imshow(hsv_image) We can type image[0,0] #This is the B,G,R values of the first pixel We can create histograms of the BGR channels too! We use the cv2.calcHist along with matplotlib. import matplotlib.pyplot as plt color = [‘b’,’g’,’r’] #the enumerate is a built in function which gives you tuples of (index,list_element) for i,col in enumerate(color): hist=cv2.calcHist([image],[i],None,[256],[0,256]) plt.plot(hist,color=col) plt.xlim([0,256]) plt.show() The calcHist function has 5 arguments: 1. [image] (even though it is already a matrix, we need to give another square bracket) 2. channel: [0] = blue (for color images) / grayscale (for grayscale images) [1] = green [2] = red 3. mask: we can either create a mask/selection of the image (later), or the value “None” gives us the full scale image 4. bin size: this is the bin size of the histogram. [256] = full scale 5. range: generally [0,256] 2. Scaling up This is an important functionality that I feel we should learn – image_scaled_by_fourth=cv2.resize(image,None,fx=0.25,fy=0.25) # arguments are: (image, dimensions of output(width,height), x_scale, y_scale) While the above code gives us a 1/4th image, we can increase the size of an image by interpolation techniques. To scale it up, we can use any of these 5 interpolation techniques: 1. INTER_AREA 2. INTER_NEAREST 3. INTER_LINEAR 4. INTER_CUBIC 5. INTER_LANCZOS4 image_zoom_2 = cv2.resize(image,None,fx=3,fy=3,interpolation=cv2.INTER_LANCZOS4) plt.imshow(image_zoom_2) Another method to quickly scale images is using the inbuilt pyramid functions: pyrDown pyrUp, which are relatively easier and less technical. 3. Crop and Brighten/Darken images There is no inbuilt function to crop images. BUT we can use NumPy slicing on the image matrix! h,w = image.shape[:2] cropped = image[int(h*.25):int(h*.75),int(w*.25):int(w*.75)] plt.imshow(cropped) We can also brighten or darken images: M=np.ones(image.shape,"uint8") added=cv2.add(image,M*25) subtracted=cv2.subtract(image,M*25) plt.imshow(added) plt.imshow(subtracted) Text manipulation This is used to bring more clarity to the text and is widely used on handwritten text. Consider this image: Now we can make this font bolder using the technique called erosion: text=cv2.imread("text.jpg") # Creating kernel kernel = np.ones((5, 5), np.uint8) # Using cv2.erode() method eroded = cv2.erode(text, kernel) dilated = cv2.dilate(eroded, kernel) That’s all for this article. I’ve hopefully provided enough techniques for you to practice. Until next time! Time taken for the execution of all commands in this article: 10 loops, best of 5: 184 ms per loop — Arkaprabha-Majumdar —
https://www.codegigs.app/free-data-science-course/become-an-opencv-master/
CC-MAIN-2022-05
refinedweb
984
60.01
Lets Build A Fitness Machine Learning App and Deploy to Azure. Today I am going to share how to deploy simple Machine Learning Model in Azure and will also share the challenges i faced while deploying this python flask app in Azure. What you will learn ? You can upskill in below skills: - Create Basic REST API'susing Python. - Basics of Machine Learning. - Deploy Python REST API'sto Azure. - Setup CI/CDusing Github Actions. - Use PostManto test Your ML API. Lets get started.... Prerequisite Install Required Python Packages Open your command prompt and execute below commands: 1. py -m pip install flask 2. py -m pip install -U flask-cors 3. py -m pip install -U scikit-learn Here we are using 3 packages. flaskis used to create REST API's, which means our API can be consumed by web app , mobile app , desktop app, IOT etc. As REST API'sreturn Jsondata, so it becomes easy to Deserialize to objects of any programming language. flask-corsis used to allow request from another domian, this will be usefull if your front end is deployed in different domain. scikit-learnis used for Machine Learning. Machine Learning Fitness DataSet: For training our ML model we are using Dataset from here. Basically we are going to find ideal weight to be fit based on your height and current weight. We have trained our model to predict from 5 feet to 6 feet of male dataset, you can train for other height ranges if required. Code Walkthrough: Open VS code inside your project folder, Create app.py file in VS Code and paste below code as required. First we are importing all required modules required in our flask code. from flask import Flask, jsonify, abort, request from werkzeug.exceptions import HTTPException from flask_cors import CORS from sklearn import tree Second we are creating objects, constructors and configuration necessary for app to run. app = Flask(__name__) CORS(app) app.config["DEBUG"] = True Finally we are implementing REST API functions, which will be called when a URL is requested by client. Below is the function , which helps us to handle errors globally in our REST API. @app.errorhandler(Exception) def handle_error(e): code = 500 if isinstance(e, HTTPException): code = e.code return jsonify(error=str(e)), code Next we have our simple function named as get-message to return Hello World! Json data, when a client initiates GET requests to /api/get-message endpoint. @app.route('/api/get-message', methods=['GET']) def getMessage(): data = {'message': 'Hello World!'} return jsonify(data) Then we have our Machine Learning endpoint which predicts fitness and returns Json data, indicating you are fit or not. :) The code is explained using comments. @app.route('/api/predict-fitness', methods = ['POST']) def predict_fitness(): if not request.json or not 'userHeight' in request.json: abort(400) # Data Cleaning, remvoing dot from userHeight value userHeight = request.json.get('userHeight', "") userHeight = int(str(userHeight).replace('.', '')) userWeight = int(request.json.get('userWeight', "")) if userHeight == 51: # Just a fix because python is removing trailing zeros while coverting to str or int userHeight = 510 # Set up training data # Expected User Inputs to classifier # Example, For a person with 5 feet height, the expected weight is 43 to 53 kg # So for this we mention in features like : features = [[5, 43], [5, 99],....]] # which means for 5 feet height, user can enter values from 43 to 99 # but the expected weight will be 4353 as mentioned in labels like: labels = [4353, 4353,.....] features = [[5, 43], [5, 99], [51, 45], [51, 99], [52, 48], [52, 99], [53, 50], [53, 99], [5.4, 53], [54, 99], [55, 55], [55, 99], [56, 58], [56, 99], [57, 60], [57, 99], [58, 63], [58, 99], [59, 65], [59, 99], [510, 67], [510, 99], [511, 70], [511, 99], [6, 72], [6, 99], [6, 72], [6, 99]] # Expected output values based on user inputs or expected weight ranges based on person height labels = [4353, 4353, 4555, 4555, 4859, 4859, 5061, 5061, 5365, 5365 ,5558 ,5558, 5870, 5870, 6074,6074 , 6376, 6376 ,6580 , 6580 ,6783 , 6783, 7085, 7085, 7289, 7289, 7289, 7289] # Train classifier classifier = tree.DecisionTreeClassifier() # Decision tree classifier is used classifier = classifier.fit(features, labels) # Find common patterns in training data # Make predictions using the trained model expectedWeight = classifier.predict([[userHeight,userWeight]]) # Get first two numbers from expected Weight expectedWeight = int(expectedWeight) fromEpectedWeight = int(str(expectedWeight)[:2]) # Get last two numbers from expected Weight toExpectedWeight = int(str(expectedWeight)[2:4]) # Check if weight is in between the range of expected weight is_Weight_In_between = userWeight >= fromEpectedWeight and userWeight <= toExpectedWeight if is_Weight_In_between: message = f'Congratulations!, Your expected weight is in between {fromEpectedWeight} kg and {toExpectedWeight} kg.' else: message = f'Your expected weight should be in between {fromEpectedWeight} kg and {toExpectedWeight} kg.' fitData = { 'isFit': is_Weight_In_between, 'message': message } return jsonify( { 'fitInfo': fitData } ), 201 Finally we have below line to run the app. app.run(debug=True) Deploying Python Flask REST API to Azure: Next we are going to use a service in Azure known as Azure App Service to host our Python Flask App. Login to Azure portal. Search for Azure App Service In the Search box and select it. Next Click Create, and fill details in Basic tab, Enter Resource Group name or create new and give a name, enter your app name. In Run Time Stack select Python 3.7 , Then Click Review and Create. Just review your details and click finally Create. Then it will show you that your Deployment is Progress, it may take few minutes. Once your Deployment is done, click Go to resource which will take you to App Management blade. Next go to your Github Account and create a Repository and upload the code using Github website or using git commands or even simple, just Fork my code repository from here Note: As we are deploying to Azure so along with app.py file in our repo we need requirements.txt file which contains all required packages to be installed in Azure App Service for hosting as shown below: Flask==1.1.2 Flask-Cors==3.0.10 scikit-learn==0.24.1 Setup CI/CD with Deployment Center and Github Actions: Once you have your Repository and the code, then next we have to connect your Repository to Azure Deployment Center which provides CI/CD outofbox with providers like Github Actions, Bitbucket etc. Github as source control and authorize using your Github credentials, then select you Repository as below: In Run Time Stack select Python 3.7 Once you click Save, the deployment starts and a workflow file is created in your Github repo to start CI/CD, as you can see below in my Github repo, in GithHub Actions tab, the build is completed and the final deploy is in progress. Here the deploy Job is pushing your code changes automatically to Azure Web App as you make any changes to your code and push to your Github repo. Its that easy :) Check your deployed REST API: App Service Application URL: https://<Your app name>.azurewebsites.net/api/get-message If you are getting Hello World! as response then your API is successfully deployed. :) Test ML REST API using Post Man: Open Postman desktop app and create a new request, give a name. next we select the Http Verb as In the Body we select Raw and Json and Json data to API. Paste Below Payload in Body: { "userHeight":"6", "userWeight":"80" } Next click Send. Finally the API predicts the result and gives back the Json results as below: { "fitInfo": { "isFit": true, "message": "Congratulations!, Your expected weight is in between 72 kg and 89 kg." } } Wow we are done! 😄. Congratulations! My Initial struggle with Azure App Service Deployment: I have deployed many .NET apps to Azure but this was first time I am deploying a Python App to Azure. After following online tutorials to deploy Python Apps, the Azure Web App was down with 502 errors. After many hours of struggle Finally got a advice in Microsoft doc to check Debug console using Kudu Console or the SCM Dashboard for your app. The Link looks like below: https://<Your app name>.scm.azurewebsites.net/. From the Log Stream I came to know that Azure is using docker and containers behind the scene and the container is failing to start. Finding and Killing The Root Issue: By Digging deeper, I came to know that some process is still running in port 8000 so the port is in use and thus container is not able to start. So as advised in this SO post, lets kill the process running in port 8000, using the below Bash command. alias kill8000="fuser -k -n tcp 8000" Finally deployed Python REST API to Azure. 😄 Github Link to Code: shaijut / Python-ML-Fitness-API A Fitness Machine Learning API build using Python Flask and deployed to Azure using GitHub Actions CI/CD Python-ML Fitness API A Fitness Machine Learning API build using Python Flask and deployed to Azure using GitHub Actions CI/CD Check below Dev.to Post: PS: You can even test this API in VS Code and PostMan in your local machine without deploying to Azure. If you have any doubts let me know in comments. Hope you enjoyed. 😄. Have a great day. References: Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/shaijut/build-a-fitness-machine-learning-app-and-deploy-to-azure-24g4
CC-MAIN-2021-21
refinedweb
1,548
63.29
Documentation Using QX, IX and similar allows you to access the inputs and outputs of a supported hardware. Of course, in the case of a normal computer, this doesn't apply since it doesn't have I/O that you can control directly like you do on a PLC (normally boolean). In order to access the I/O of a hardware, you need to set the PARAMS data input to a corresponding value to select which I/O to access. 4diac FORTE should be compiled with the corresponding module, that can be selected in CMake. You find here the values to be set to PARAMS for the following supported platforms: - SysFs (Raspberry Pi, Beagle Bone Black and Similar Boards) - Raspberry-SPS - Odroid - PiFace Digital 2 - µMIC.200 - Bosch Rexroth PLC (MLPI) - Lego Mindstorms EV3 - PLC01A1 SysFs (Raspberry Pi, Beagle Bone Black and Similar Boards) The following picture shows an example application from the Blinking tutorial which is extended with IX and QX function blocks. The extended application periodically toggles on pin number 8. It can be started and stopped via pin number 2. You can connect an LED to obtain a blinking light. The PARAMS data input in the IX and QX FB define which pin should be used. The application also contains E_CYCLE and E_SWITCH function blocks (which are not in the Blinking tutorial). The first one generates an event every 500 ms. This way the IX block will check the hardware input every half second if the signal has changed. E_SWITCH_0 starts or stops the action, depending on the IN data output of the IX function block. Remember that if you have any problem and cannot find the solution in the documentation, visit the forum. Raspberry-SPS To use the digital and analog IOs of the Raspberry-SPS please use the IX and QX function blocks for one bit as well as the QW and IW function blocks for word which are provided in the io folder of the tool library. Configure your 4diac Device with the correct IP and mapp an application on it as it is illustrated within the following image. The structure of the PARAMS data input ports is busChannel.deviceNumber.portNumber. The Raspberry-SPS has two I2C bus channels. The number of the different devices is listed in the following table. The port number always starts at 0. Odroid Odroid is an embedded board that you can say it's a more powerful Rapsberry Pi, and it has analog inputs. In order to use the I/O of this board. It works similar to the Raspberry Pi, so you can use QX and IX with the number corresponding to the pin number, and to access the Analog Inputs, you should use IW. 0 and 1 should be set to PARAMS of IW in order to access the first and second analog inputs respectively. PiFace Digital 2 The PiFace 2 is a hat for the Raspberry Pi with Digital I/Os. You can control the Inputs and Outputs from 0 to 7 just using the right number, similart to the Raspberry Pi. Attention: The PiFace was not tested, so if you find a problem or want to add support to the other features of the board, please file a bug. µMIC.200 This module uses the library provided for the µMIC.200. You need the umic_dio.h, umic_relay.h and umic_led.h headers and also the umic.so library to be present in your system and accessible by the compiler. The parameters should be the type and its number separated by a dot: TYPE.NUMBER NUMBER depends on TYPE, where TYPE can be (all are in lowercase): - led: controls the LED1-4 of the µMIC.200 (Only QX can be used with led). The possible numbers for NUMBER are from 0 to 7 according to the table below: - 0 → LED1_GREEN - 1 → LED2_GREEN - 2 → LED3_GREEN - 3 → LED4_GREEN - 4 → LED1_RED - 5 → LED2_RED - 6 → LED3_RED - 7 → LED4_RED Example: led.4 → controls the LED1 color red - dio: controls the 8 digital I/O at pins 5-12. Number can be from 1 to 8. Use IX to make it input, and QX to make it output. Example: dio.3 → will access Digital I/O 3 at pin 7 of the µMIC.200 - relay: controls the relays. The possible numbers are 1 and 2. The 1 controls the NO relay and 2 the NC. Only QX can be used with relay. Example: relay.1 → will control the NC relay at pins 14- 15 Attention: To use the relay and led, in some cases you might have to change some header files in your µMIC.200. You need to find the files /usr/include/umic_led.h and /usr/include/umic_relay.h and add two things: - At the beginning, right after the first #define add://----------------------------------------------------------------------------// // take precautions if compiled with C++ compiler #ifdef __cplusplus extern "C" { #endif //----------------------------------------------------------------------------// - And at the end, right before the last #endif, add//----------------------------------------------------------------------------// #ifdef __cplusplus } #endif // end of C++ compiler wrapper //----------------------------------------------------------------------------// If you don't want to use the relay and led, or you don't want to change the header files, go to ~/org.eclipse.4diac.org/src/modules/umic/processinterface.h and set the definitions of UMIC_LED_ENABLED and UMIC_RELAY_ENABLED to zero. Bosch Rexroth PLC (MLPI) The MLPI interface was tested on a IndraControl XM22 PLC from Bosch Rexroth. In order for 4diac FORTE to access the I/O, a PLC program must be running already in the PLC. 4diac FORTE will connect to this program and send/receive information to/from it. The parameters that you need to use in PARAMS depend therefore from the names you gave to the I/Os in the PLC program. That means that you need to create your program and give them names. An example was tested where the Digital Inputs were called di01, di01, di02 and so on, and similarly with Digital Outputs (do00, do01, do02, etc). To access them from 4diac FORTE, the parameters were: - Application.UserVarGlobal.di00 → Digital Input 00 - Application.UserVarGlobal.di01 → Digital Input 01 - Application.UserVarGlobal.do00 → Digital Output 00 - ... and so on Lego Mindstorms EV3 This section is a reference of the parameters that can be used in 4diac FORTE to access the I/O of the LMSEV3. Reading this section carefully without using them could be quite boring and even useless,because the details will be forgotten if not used. We recommend to give a quick reading in order to know what can be accessed using 4diac FORTE. The example presents the control of a motor using a PID controller. Standard Access Transparent Access The standard access refers to the access through well defined I/O blocks, that allow transparency in the access of I/O in different platforms. Each block supports different parameters in the PARAM input, which defines the I/O of the system that will be accessed. The blocks used to control the I/O are named with two capital letters. The first letter indicates the direction: - Q: output - I: input The second letter indicates the size of it: - X: boolean - W: Word (16 bits) - D: Double (32 bits) So, for example a boolean input is read using the functional block IX. The output word, for example the PWM value of a motor is controlled with a QW block. Not all sizes are implemented in all platforms. Parameters for the Lego Mindstorms EV3 The parameters for the Lego Mindstorms EV3 were designed for general use, and extension of them is easy to achieve. The format of the parameters follow 2 important rules: The only capital letters allowed are the A,B,C,D when used to define the output ports of the EV3 (outA, outB, outC, outD). The parameters are given in three parts separated by points: port.typeOfIO.specific port The first part of the parameter specifies the port of the Input or Output. For the EV3, the possible values are [in1 | in2 | in3 | in4 | outA | outB | outC | outD | emb]. The first eight are directly related to the actual ports of the EV3. The 'emb' port refers to anything that can be managed in the EV3 but not in the I/O ports, until now the leds and buttons, but in the future the screen and speaker would also be reference with the 'emb' port. typeOfIO The second part, specifies what is actually connected to the specified port. The possible values are [sensor | sensorw | button | led | motor]. The difference between 'sensor' and 'sensorw' is that 'sensor' is used for boolean sensors, and 'sensorw' for sensors which values are words (16 bits). The 'led' and 'button' are used to control the embedded leds and buttons of the EV3. 'motor' is used for connected motors in any of the outputs. It must be clear that not all of the types can be used with any port. For example a led is only an output, so if used with a input, for example IX it won't intialize. Also, if used to control as a word QW, it won't initialize either. specific As its name says, this third parameter has no general form, and can also be formed of other several parameters separated by points(as in case of the led). Depending of the second parameter, the allowed values for this one are different, and it will be seen in the follow section. Specific Parameters For each possible typeOfIO, the list of possible specific part of the parameters are presented, followed by the limitations and examples. led (output only): - right.red - right.green - left.red - left.green Each specific is used to control each one of the four leds available. The orange led doesn't exist, but is the sum of red + green, so in order to turn on an orange light of the left for example, both the green and red of the left must be turned on. Limitation: 'led' must be used with QX and with the 'emb' port. Example: emb.led.right.red button (input only): - up - down - left - right - enter - backspace Reads the buttons of the EV3. Limitation: 'button' must be used with IX and with the 'emb' port. Example: emb.button.enter sensorw: - [0-N] (input only) - mode (input or output) The sensorw is used with sensors that read a word value instead of a boolean and normally are more complex than regular touch sensors (boolean). Each sensor specifies how it retrieves the data, how many values and the format of it. Specifications of the supported sensors by the ev3dev are listed here. Each one of the sensors' values is stored in a different file in the EV3. The specific [0-N] determines which file it is. For example the GYRO sensor can show the angle and the rotation speed. The values are store in value0 and value1 files, so to access just the rotation speed, the '1' specific must be used. The actual value must be checked in the link above. The 'mode' specific is used to change the mode of the sensor according to its specification. As an input, mode will retrieve the mode in which the sensor is working, and as output, the mode will be changed. The mode are mirrored using a 0 index according to the specification of the sensor. Limitation: sensorw cannot be used with boolean blocks (QX or IX). It must be used with an in[1-4] port, but is not restricted to input or output blocks when the specific "mode" is used. Limitation: the specific [0-N] can be used only as input, and the number shouldn't be greater than the allowed by the sensor in the mode that's working at the moment of initialization. Examples: Let's see an example of using the GYRO sensor specified here. The GYRO works in GYRO-ANG by default, showing only one value, then the only possible parameter is: in1.sensorw.0 (IW) If a 1 instead of a 0 were used, the initialization would have failed. If the mode of the sensor wants to be retrieve, the "mode" specific should be used with the input IW. in1.sensorw.mode (IW) Because the GYRO sensor by default is in GYRO-ANG and GYRO-ANG is the first of the list according to the link above, the FB will read a 0 (0-index from the list in the link). If then the sensor's mode is changed using a QW with "mode" specific in1.sensorw.mode (QW) and writing a value 3 (0-index mode), the sensor's mode will change to GYRO-G&A wich shows 2 values, angle and rotation speed. After changing the mode, two IW blocks with parameters - in1.sensorw.0 (IW) - in1.sensorw.1 (IW) will retrieve the both values. ATTENTION!!! To get the rotation speed, the block with parameter "in1.sensorw.1" must be initialized AFTER the mode has been changed, because before changing the mode, the file for the rotation speed doesn't exist in the system and cannot be read. sensor (input only): - [0-N] Used to control touch sensor (binary). sensor' specific is a subset of sensorw' specific. It doesn't allow mode, and even though the value is always stored in '0', the '0' specific must be used for two mainly reasons: 1. To have a standarized way of showing the parameters, and 2. to avoid problems in the future where a new sensor is stored in '1' instead of '0'. Limitations: 'sensor' must be used with QX and with an in[1-4] port. Example: in2.sensor.0 motor - enable (output boolean only) - reset (output boolean only) - stop (output word only) - pwm (input word or output word only) - position (input or output double word only) - speed (input word only) - rot (input word only) The 'enable' controls the state of the motor accroding to the OUT input of QX. A TRUE value turns ON the motor and a FALSE value turns it OFF. It must be used with QX. The 'reset' zeroes the position, pwm and speed values, and turn off the motor with a TRUE value in OUT. A FALSE value in OUT has no effect. It must be used with QX The 'stop' specific must not be confused with stoping the motor, because it specifies the way the motor behaves when PWM is zero according to the value written to OUT. It must be used with QW -. The 'pwm' reads the current pwm when using with IW of the motor or writes the desired pwm for the motor when using with QW. The 'position' reads the current position of the motor. In how many steps a rotation is divided, can be read from using the 'rot' specific. It must be used with ID. The position can be written, wich sets the position to the desired value. The 'speed' reads the current speed in positions per second. It must be used with IW. The 'rot' reads the motor specication of how many steps are in one turn of the motor. This can be read from the specifications of the motor normally and it's not necessary to add the block in an application just to read this information. It must be used with IW. Limitations: Event though each specific has its limitation regarding the type of block to use with, the typeOfIO "motor" must be used with one of the outputs of the EV3 out[A-B] Examples: outB.motor.enable (QX) outB.motor.pwm (QW writes the desired pwm, IW reads the actual pwm) outB.motor.position (ID) I/O Cheatsheet This section doesn't add any new information, but summarize which parameters and FB should be used with each I/O. Touch sensor connected to an input port. It must be used with IX in[X].sensor.[Y] where X is the number of the input in the EV3 (1-4) and Y is the value number where to read from according to the sensor. Read the specification of the sensor to be sure. Start trying from 0 if no information is available. One of the 6 buttons of the EV3. It must be used with IX emb.button.[up | down | left | right | enter | backspace] Only one of the inputs can be selected for each IX. Led of the EV3. It must be used with QX. emb.led.[right | left].[red | green] The orange led does not exist, but is the sum of green + red. Motor connected to an output port. The FB to use depends on the parameter of the motor that wants to be handled. Turn ON/OFF the motor. It must be used with QX. out[X].motor.enable Where X is the output capital letter of the EV3 (A-D, in uppercase). A TRUE value in OUT input turns the motor ON, and a FALSE value turns the motor OFF. Reset the motor variables and stop. It must be used with QX. out[X].motor.reset Where X is the output capital letter of the EV3 (A-D, in uppercase). A TRUE value in OUT resets all of the motor parameter attributes to their default values. This will also have the effect of stopping the motor. A FALSE value doesn't have any effect. Read current duty cycle (pwm) from a motor. It must be used with IW. out[X].motor.pwm Where X is the output letter of the EV3 (A-D, in uppercase). The read value is between -100 and 100. It might seem confusing reading an input from an output but even though the signal is from an output, it is an input to the system. Read the current speed of the motor. It must be used with IW. out[X].motor.speed Where X is the output letter of the EV3 (A-D, in uppercase). It reads the velocity in steps per second. How many steps are in one turn of the motor can be read with the parameter "rot" (see next). If a motor has 360 steps in one turn and the speed is 720, then the speed is 2 rotations per second. Read the number of steps in each turn. It must be used with IW. out[X].motor.rot Where X is the output letter of the EV3 (A-D, in uppercase). This value is fixed in the motor and can be found in the specifications of the motor and it normally shouldn't be used in an application. Read the current position of the motor. It must be used with ID. out[X].motor.position Where X is the output letter of the EV3 (A-D, in uppercase). If the motor was in position 0 and the "rot" parameter returns 360, then it makes 3 turns, the position will be 1080. Write the position of a motor. It must be used with QD. out[X].motor.position Where X is the output letter of the EV3 (A-D, in uppercase). Sets the current position to a specific value. Write a desired duty cycle (pwm) for a motor. It must be used with QW. out[X].motor.pwm Where X is the output letter of the EV3 (A-D, in uppercase). The value range should be between -100 and 100. Specify a desired behaviour when the pwm is set to 0. It must be used with QW. out[X].motor.stop Where X is the output letter of the EV3 (A-D, in uppercase). Possible values are 0, 1 and 2: -. sensor with values of 16 bits (not touch sensor) Read a sensor value. It must be used with IW. in[X].sensorw.[Y] where X is the number of the input in the EV3 (1-4) and Y is the value number where to read from according to the sensor. Read the specification of the sensor to be sure. Start trying from 0 if no information is available. Read a sensor's mode. It must be used with IW. in[X].sensorw.mode where X is the number of the input in the EV3 (1-4). The read value is an index number according to the list of modes of the sensor. Write a sensor's mode. It must be used with QW. in[X].sensorw.mode where X is the number of the input in the EV3 (1-4). The value to be written should be the index number according to the list of modes of the sensor. Custom Access The custom access allows the user to access part of the system that are not implemented in the standard access. Care must be taken, since critical part of the system might be accessed. For each block, the specification of the new inputs and outputs are given. fileWriter: Block to write to any file in the system. The special inputs are: FILE_NAME: absolut path of the file. Example: /sys/class/tacho-motor/motor1/command S1: String to write to the file Example: run-direct fileReader: Block to read from any file in the system. The special inputs and outpus are: FILE_NAME [input]: absolut path of the file. Example: /sys/class/tacho-motor/motor1/duty_cycle S1 [output]: String read from the file Example: 58 Where to go from here? You can see the supported protocols: Supported Communication Protocols You can see the examples: If you want to go back to the Where to Start page, we leave you here a fast access
https://www.eclipse.org/4diac/en_help.php?helppage=html/parameters/parameters.html
CC-MAIN-2020-50
refinedweb
3,568
64.61
… Books vs. Movies I am a great fan of reading. I am a great fan of movies. Sometimes these complement each other, but most times they don’t. Don’t push me! I’ve just finished reading First Blood by David Morrell. Most of my generation will recognise the title as the subtitle of the 80’s movie Rambo – First Blood (as it was released in the UK). Once again the book blows the movie away, and once again I’ve made the mistake of reading the book after the movie. In my defence the book was written before I was born, I first saw the movie over 15 years ago, and I didn’t know the book existed until recently. David Morrell’s book is an intense story of a man-hunt, survival, and revenge that is dumbed down to a mediocre action movie. The movie was notorious at the time for the ‘stitching his own arm’ sequence which simply doesn’t happen in the book. The book is bloodbath of carnage and murder with a high body count. If I remember correctly, the movie was more like an episode of the A-Team with lots of gunfire and explosions, but very little actual death1. Stifling Imagination Which brings me onto the main reason for this post… Having watched the Rambo movie as a kid, I could not help but picture Sylvester Stallone as Rambo whilst reading it, even though the description was completely different. I remember reading Jurassic Park shortly after I saw the movie and enjoying the book far more, but my mental picture of the characters and locations was permanently tainted by the movie. On the flip side, I read both Firestarter and The Stand before seeing their respective movies which then never quite matched my mental images that I’d created whilst reading. The same holds true of the Harry Potter books and movies, except that this time the movies almost matched my imagination, yet large chunks of the books had been ignored for sake of ‘pacing’. It would seem that seeing the movie before reading the book impairs my potential enjoyment of the book, yet reading the book before seeing the movie means that the movie will never live upto my expectations. A solution? I’ve not read any of the Bourne books, but I have seen The Bourne Identity and plan to see The Bourne Supremacy soon. Someday I may read the books, but for now I’m going to steer clear of them and enjoy the movies for what they are. I may do the same for I, Robot although I think I would prefer to read the book for this one… Exception to the rule There has to be an exception, and that is The Lord of the Rings. I never read the books until after I saw The Fellowship of the Ring and Peter Jackson’s vision was so close to that of the books that it helped me enjoy them more. Unfortunately, having now read the first and second books, I find the films distinctly lacking (similar to the Harry Potter films). 1 It’s been a few years since I saw it so forgive me. Ghost of site past I’ve finally gotten around to uploading the archive of Schwuk’s Playstation Page – a prior incarnation of this site. My reasons for doing this are two-fold: One day I may get around to dumping the content into another format so that it at least looks good! Interesting comparison Using. Better late than never! The title applies to more than one topic… .NET Rocks! I’ve finally gotten round to catching up on last few1.NET Rocks! shows2, having missed a bunch due to holidays/lack of bandwidth. They make a great show, but at 70-100MB per show3… Ow! .NET Rocks is really good show, and they seem to have a lot of fun doing it. I recommend it to all .NET developers. Visual C# 2005 Express Listening to show #70 prompted me to finally try out the Visual Studio 2005 Express betas – in particular Visual C# 2005 Express Edition Beta. In a nutshell, it confirms my belief that currently Mono will convert more people to .NET than to away from it because the developer tools (to a certain extent) make the language. A brief digression… I believe that “Visual Studio” is one of the best IDEs out there, and even in it its cut down ‘Express’ form blows the competition away. Of course I might feel differently if I had to pay for it instead of my employer, but if I was self-employed (again) I would probably still put my hand in my pocket for it. Can you give a higher recommendation? I think the Express products are a really good idea, and as long as Microsoft get the price right, they could net (pun intended) a lot of ‘non-professional’ developers. Of course there is always SharpDevelop for the independent .NET developer, and it is an excellent achievement but it requires a more intimate knowledge of .NET and the SDK to get the most out of it. Anyway, back on topic… Caveat Installer Installation is a big gotcha – do people not realise that not everyone has broadband yet? The initial download for C# Express is 2.34MB. Once you kick off the install, it wants to download a futher 28MB of C# Express and 24MB of .NET Framework 2.0 Beta, and also offers to download 36MB of SQL Server 2005 Express and 162MB of MSDN Documentation. Try doing that on ISDN! I cheated though… I’d already read the manual install instructions and downloaded the following: …via broadband. I’d missed the full install (30MB) of the Visual C# though… D’oh! I’ll certainly be doing it for the Visual Web Developer 2005 Express as 42MB (via web installer – 44MB for full download) is maybe a bit too much for ISDN. Then again, I’ve downloaded bigger… Initial impressions So far I’ve not dived into it too deeply, but the surface enhancements seem well thought out. My favourite so far has to be the ‘hints’ the form designer gives you when laying out controls on Windows Forms. Makes things far easier… Hello World! For my regular readers (and I do have a few!) who remember my first Mono app, here’s its .NET 2.0 counterpart: Nothing earth shattering of course, and very simple to code. Again using my PHP / NuSOAP based WSDL enabled ‘Hello’ web service4 to test interoperability and web service consumption. Visual C# Express (and of course Visual Studio) make consuming web services far easier than MonoDevelop or SharpDevelop both of which require you to use the .NET SDK tool wsdl.exe which generates the proxy class for you. Of course some (including me) may say that the wsdl tool gives you more control over your web service consumption (e.g. multiple proxies within the same namespace) but from a user-friendliness point-of-view, the ‘Visual’ products win hands down. 1 Actually eight… Oops… 2 Now added to my ‘External’ links in the right hand side bar. 3 I’ve just noticed that on the last few shows they’ve been offering low quality versions (Windows Media only though) at ~20MB – much better! They always offered a streaming version (again Windows Media only) though, but my limited bandwidth is too precious… So every so often I hijack various friends ADSL lines and grab a few to listen to at my leisure. 4 Which I will publish one of these days… Happy! VoIP Just a brief1 comment on VoIP – Nathan Torkington blogs about his First steps in VoIP. In it he mentions Skype which frequent readers (if I have any) will know I’ve used quite a bit. He wanted VoIP so that he could communicate with his family back home whilst on trips. At the moment I’ve mainly used it to communicate with family abroad instead of spending money on phone calls, and it works admirably well for this – as Skype say on their website: Better sound quality than your regular phone …and it’s true – a call to Australia sound better on Skype than they do on BT! The trouble with VoIP is too many options. As well as Skype2, there’s X-Lite (as mentioned by Nathan in his blog), Linspire have released the PhoneGaim plugin for Gaim, and of course you can use MSN Messenger for voice calls – except I could never get MSN to work with firewall, and Skype worked out of the box. By no means an exhaustive list, but a start… I think VoIP definitely has a place in the modern internet, and as broadband3 becomes more ubiquitous there will be a greater emphasis on using it. 1 Okay, it ended up not being that brief! 2 Skype have recently released version 1.0 for Microsoft Windows and Pocket PC, and 0.91 for Linux. 3 I’ve finally got a date for ADSL! 15/12/2004 – w00t! The more… MSN Web Messenger & other stories… Got this one via Pocket PC Wireless.net – it appears that Microsoft (or MSN rather) have a browser based MSN Messenger in beta. Fairly cool in itself, but to my surprise it works under Firefox on Linux – very impressive! Just goes to show you don’t need to use ActiveX to get fancy applications in a web browser. Other stories… A few updates to the site… First up is that I’ve ditched the banners that were based on photos from OpenPhoto.Net – a good resource for royalty free images if you need them – instead I’ve started using my own photos. I was browsing through them last night and decided that some of them were good enough to use, so… The banners are rotated at random – you can keep hitting refresh or look here. I’ve also added a colophon. If you’re wondering what that is: A statement found at the end of a book. A colophon may give information about the typography and printing methods used in the book or may serve as a statement of printing limitation. I’ve noticed them on various sites (notably Dunstan Orchards blog) in place of the typical “about me” or “about this site” pages and decided to put one here.
http://m.schwuk.com/articles/2004/08
crawl-002
refinedweb
1,735
69.72
Tell us what you think of the site. I’m new to python and I would like to know how i can find and read all key value and key time of each fcurve of all kFBPT_double custom properties from a marker. I try to dig into the doc but it’s kind of difficult to understand for me. If any one can help me, it would be great. I was trying as well to find all custom properties from a marker and print each custom property name but I just find the function to find the custom property by name in the doc… Any advice? Thanks Here’s a little example that should help. from pyfbsdk import * nullObj = FBFindModelByName('Null')prop = nullObj.PropertyList.Find('CustomKeyedNumber')# Get the animation node of a propertyanimNode = prop.GetAnimationNode()# The FCurve of the node has the key datafor k in animNode.FCurve.Keys: # Each key has attributes like Time, Value, Bias, Continuity, etc. print 'Key at time %s = %.3f' % (k.Time.GetTimeString(), k.Value) Stev Thanks a lot. I understand now how to read and store key value and key time of a known custom property. My problem is that I’m trying to read data from an FBX file that contains markers with lots of custom properties and I would like to find all these custom properties. As I don’t know any of these custom property names, how can I find them and get these names? My goal is to automatically find custom property and store names and keys (if there are keys) whatever is the name of the custom property. I think something like this should work: from pyfbsdk import * nullObj = FBFindModelByName('Null')# Check all propertiesfor prop in nullObj.PropertyList: # If the property isn't a user prop, skip it if not prop.IsUserProperty(): continue print 'Showing Keys for Property: %s' % prop.Name # Get the animation node of a property animNode = prop.GetAnimationNode() # The FCurve of the node has the key data for k in animNode.FCurve.Keys: # Each key has attributes like Time, Value, Bias, Continuity, etc. print 'Key at time %s = %.3f' % (k.Time.GetTimeString(), k.Value) Thanks _stev_. This is exactly what I was looking for. Now I understand how I can check if a property isn’t a custom property. I just need to add a test for any custom property type that doesn’t have a direct animation node like Color where you have to go to r g b or text that don’t have any animation. I will see if I can succeed and post the result. Thanks again!
http://area.autodesk.com/forum/autodesk-motionbuilder/python/how-to-find-key-value-and-key-time-on-kfbptdouble-custom-property/
crawl-003
refinedweb
435
73.68
Just to be clear, this post is about measuring the times at which events happen. Making things happen at specific times is a completely separate (and much harder) problem. The clock_gettime() function (under Raspbian) gives results with only microsecond resolution and also requires almost a microsecond to execute. This isn’t very helpful when trying to measure short-duration events. What we want is access to the CPU’s cycle counter, which gives closer to nanosecond resolution and has low overhead. This is easy to accomplish: static inline unsigned ccnt_read (void) { unsigned cc; asm volatile ("mrc p15, 0, %0, c15, c12, 1" : "=r" (cc)); return cc; } The problem is that if you call this code from user mode, your process will die due to executing an illegal instruction. By default, user mode does not get to read the cycle count register. To change this, we need a silly little LKM: #include <linux/module.h> #include <linux/kernel.h> /* * works for ARM1176JZ-F */ int init_module(void) { asm volatile ("mcr p15, 0, %0, c15, c9, 0\n" : : "r" (1)); printk (KERN_INFO "User-level access to CCR has been turned on.\n"); return 0; } void cleanup_module(void) { } After the insmod call, the cycle count register will be accessible. For all I know there’s a good reason why this access is disabled by default, so please think twice before using this LKM on your security-critical Raspberry Pi. (UPDATE: See pm215’s comment below, but keep in mind that if a local user wants to DOS your RPi board, there are many other ways to accomplish this.) Annoyingly, the Raspbian folks have not yet released a kernel headers package for the current kernel (3.2.27+). Also, an LKM compiled against older headers will fail to load. However, this thread contains a link to some updated headers. Here’s a tarball containing the code from this post and also a compiled LKM for the 3.2.27+ Raspbian kernel. I’m writing this up since cycle counters are awesome and also stupid problems like the missing kernel headers made it take an embarrassing amount of time for me to get this going. There’s a lot of ARM timer code out there but all of it that I found is either intended for kernel mode or else fails to work on an ARM11 chip. I actually had to read the manual to make this work. The solution here is the best one that doesn’t involve modifying the kernel. A better approach would be to expose this functionality through /proc. 4 thoughts on “High-Resolution Timing on the Raspberry Pi” My *guess* as to why this is off by default is that some people might care about preventing timing based side channels disclosures. Safe by default my be annoying, but it’s still safe. Hi bcs, that sounds reasonable. Also it looks like this bit gives access to the system validation registers, which I don’t know anything about. Setting the V bit also allows userspace to access the System Validation Operations Register. Among other amusing effects this allows any userspace process to schedule a hard reset of the CPU in N cycles time. NB also that to set the V bit in the first place you need to be in the Secure world. I assume the RPi is, but that’s not the usual configuration for most systems out there, which is probably another reason Linux doesn’t try to do anything with this register. Hi pm215, thanks. Yes, the RPi must be using the secure world since I can confirm that this code works. Adding an ioctl() to my little driver (and not setting the V bit) would be the easy way to securely expose the cycle counter to user space, but this may not end up being any faster than clock_gettime().
http://blog.regehr.org/archives/794
CC-MAIN-2016-22
refinedweb
641
70.13
IRC log of xproc on 2007-10-18 Timestamps are in UTC. 14:58:35 [RRSAgent] RRSAgent has joined #xproc 14:58:35 [RRSAgent] logging to 14:58:37 [Norm] Meeting: XML Processing Model WG 14:58:39 [Norm] Date: 18 October 2007 14:58:43 [Norm] Agenda: 14:58:45 [Norm] number Meeting: 88, T-minus 2 weeks 14:58:47 [Norm] Chair: Norm 14:58:49 [Norm] Scribe: Norm 14:58:51 [Norm] ScribeNick: Norm 15:01:16 [avernet] avernet has joined #xproc 15:01:21 [Norm] Zakim, this will be xproc 15:01:21 [Zakim] ok, Norm; I see XML_PMWG()11:00AM scheduled to start now 15:01:33 [Zakim] XML_PMWG()11:00AM has now started 15:01:40 [Zakim] +[ArborText] 15:02:15 [Zakim] +??P8 15:02:19 [avernet] zakim, ? is avernet 15:02:19 [Zakim] +avernet; got it 15:03:10 [MoZ] Zakim, what is the code ? 15:03:10 [Zakim] the conference code is 97762 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), MoZ 15:03:32 [ht] zakim, please call ht-781 15:03:32 [Zakim] ok, ht; the call is being made 15:03:34 [Zakim] +Ht 15:03:52 [ht] zakim, who is on the phone? 15:03:56 [Zakim] + +95247aaaa 15:04:00 [Zakim] On the phone I see PGrosso, avernet, Ht, +95247aaaa 15:04:03 [MoZ] Zakim, aaaa is me 15:04:12 [Zakim] +MoZ; got it 15:05:26 [Andrew] Andrew has joined #xproc 15:05:30 [ht] Norm, shall I start the call w/o you? 15:05:55 [Zakim] +??P25 15:06:02 [Andrew] zakim, ? is Andrew 15:06:03 [Zakim] +Andrew; got it 15:06:28 [MoZ] Zakim, who is on the phone? 15:06:28 [Zakim] On the phone I see PGrosso, avernet, Ht, MoZ, Andrew 15:06:38 [ht] 15:08:06 [ht] scribe: ht 15:08:12 [ht] HST: Agenda approved 15:08:27 [ht] 15:08:48 [ht] HST: Minutes approved 15:09:29 [Zakim] +Norm 15:09:30 [ht] HST: Next meeting 25 October 15:10:20 [ht] HST: NW apologies for 25 Oct and 1 Nov, HST to be in the chair _pro tem_ 15:11:38 [ht] HST: Charter has been extended for 1 year 15:11:50 [ht] PG: F2F agenda? 15:12:08 [ht] NW: one day on comment processing, one day on the future, I guess 15:12:29 [ht] PG: I find a detailed agenda helps folk to be prepared 15:12:43 [ht] NW: Well, item 1 is "Read and be familiar with the details of the issues list" 15:12:53 [ht] scribenick: Norm 15:13:13 [Norm] s/scribe: ht/scribenick: ht/ 15:13:24 [Norm] Topic: Charter extension 15:13:27 [Norm] Extended 1 yaer. 15:13:33 [Norm] s/yaer/year/ 15:14:06 [Norm] Topic: Review of action items 15:14:36 [Norm] A-86-01: Alex to review XSLT streaming requirements before the face-to-face. 15:14:41 [Norm] Continued 15:14:48 [Norm] A-86-03: Henry to reply to the commenter (non-string parameters; issue 30) 15:14:52 [Norm] Completed 15:14:58 [Norm] A-86-04: Henry to craft the prose to cover the defaulted output case 15:15:17 [Norm] Continued 15:15:22 [Norm] A-87-01: Norm to take a stab at reconsidering the default inputs feature applying it only to ports that are not primary 15:15:25 [Norm] Continued. 15:15:30 [Norm] A-87-02: Alex to propose some text about imports and circularity 15:15:34 [Norm] Continued 15:15:39 [Norm] A-87-03: Norm to attempt to incorporate Richard's draft text about step type scope 15:15:42 [Norm] Continued. 15:16:10 [Norm] Topic: Comment 29: Determining whether a pipeline has a (defaulted) output 15:16:10 [Norm] -> 15:16:22 [Norm] Continued pending Henry's action 15:16:37 [Norm] Topic: Comment 6: Bindings for pipeline inputs 15:16:37 [Norm] -> 15:17:02 [Norm] Continued pending Norm's action 15:17:08 [Norm] Topic: Comment 18: Scope of step types 15:17:08 [Norm] -> 15:17:15 [Norm] Continued pending Norm's action 15:17:22 [Norm] Topic: Comment 24: Passing PSVIs between steps 15:17:22 [Norm] -> 15:19:16 [Norm] Norm outlines the issue and summarizes Jeni's observed options 15:19:58 [Norm] Henry: I think we leave it impl. defined. 15:20:53 [Norm] Some discussion of how defaulted attributes fit in: probably covered by the spirit of the best efforts clause. 15:21:43 [Norm] Proposal: Leave it implementation-defined. 15:21:44 [Norm] Accepted. 15:22:21 [Norm] ACTION: Henry will respond to the commenter on comment 24: passing PSVIs between steps. 15:22:30 [Norm] Topic: Comment 7: Saxonica comments on sections 1 and 2 15:22:31 [Norm] -> 15:23:22 [Norm] Norm reviews the points in Mike's message. 15:23:27 [Norm] Point 1: accepted. 15:23:55 [Norm] Point 2: changed validate-* to validate-with-*; accepted. 15:25:13 [MoZ] 15:26:38 [Norm] Point 3-6 are editorial. 15:26:47 [Norm] Point 7: rejected, also made a separate issue 15:27:32 [Norm] Point 8: discussion 15:27:51 [Norm] Henry: I think we're pretty clear that we're not answering this question. 15:27:56 [Norm] ...We're trying not to be too precise. 15:29:30 [Norm] 15:30:38 [Norm] Henry: No, that doesn't work. An XML document isn't an Infoset, the infoset is just a set of terms 15:31:26 [Norm] Henry: But maybe it's ok. 15:31:30 [Norm] ...Sure let's try this. 15:31:36 [Norm] Norm: What about A.3? 15:31:41 [Norm] Henry: Yes, that looks fine too. 15:32:14 [Norm] Norm: Anyone unhappy with the editor's resolution of Mike's point 8? 15:32:16 [Norm] None heard. 15:32:52 [Norm] Point 9: 15:32:53 [Norm] 15:34:03 [Norm] Norm: Anyone unhappy with the editor's resolution of Mike's point 9? 15:34:10 [Norm] None heard. 15:34:22 [Norm] That leaves some editorial clarifications, but I think we've covered the technical issues 15:34:42 [Norm] Topic: Comment 12: Saxonica comments on sections 3 and 4 15:34:42 [Norm] -> 15:35:11 [Norm] 15:35:54 [Norm] Let's leave the "editorial" and "clarification" points to the editor unless they turn out not to be 15:36:00 [Norm] Henry: I'm prepared to skip to 7. 15:37:11 [Norm] Henry: I think he's right. 15:37:19 [Norm] Norm: I don't think these apply to the things *inside* the elements in the step 15:37:40 [Norm] Henry: So you meant "children" not "contains"? If so, you'll have to repeat it endlessly. 15:37:43 [Norm] Norm: Uhm... 15:38:12 [Norm] Henry: Can't we just say that these rules don't apply inside p:inline? 15:38:17 [Norm] Norm: Perhaps 15:39:42 [Norm] Some discussion about what constitutes a step. 15:40:10 [Norm] Norm: I think children would work. 15:40:38 [Norm] Henry: What about giving the fifth and sixth bullets a parallel construction 15:40:51 [Norm] Henry: If any element in the XProc namespace other than p:inline, or any step, has text node children... 15:40:56 [Norm] Norm: Sure, that would work for me. 15:41:11 [Norm] Norm: I think that resolves point 7. 15:41:35 [Norm] Point 8: 15:42:59 [Norm] Henry: I think replacing "within its container" by "immediately contained by that steps container" 15:43:39 [Norm] ...Or if we've formally defined subpipline as the immediately contained steps, then "the last step in document order in the subpipeline" 15:43:51 [Norm] Norm: I'm happy to attempt to clarify that. 15:44:05 [Norm] Point 11: 15:46:30 [Norm] Some of this is editorail. 15:46:48 [Norm] Norm: I think we have clarified that select only selects elements or documents. 15:47:07 [Norm] Henry: I've always said that select needs the same namespace fixup we already described. 15:48:41 [Norm] Henry: We've already re-worded 4.2 so there isn't a double "each" anymore. 15:49:08 [Norm] Norm: But that leaves "wrapped" and an explicit pointer to 2.6.1 15:50:32 [Norm] Norm: The select question points to 5.2 and aon through to p:input where it's covered. 15:51:41 [Norm] Henry: No, I don't think we want to make the reference to 2.6.1 explicit. 15:51:58 [Norm] ACTION: Norm to see if the commenter agree's we've addressed his concerns. 15:52:41 [Norm] Point 12: 15:52:51 [Norm] Henry: I think we can replace "aggregated" with "concatentated" here. 15:53:01 [Norm] Norm: I'm willing to do that and see if it helps. 15:53:39 [Norm] Henry: The prose is still a bit terse. 15:54:03 [Norm] ...I think we should unpack it and make it more explicit. 15:54:05 [Norm] Norm: Ok. 15:54:20 [Norm] Point 13: 15:55:35 [Norm] Norm: We probably need to make the distinction between match and select more clear. 15:55:57 [Norm] Henry: Not that you need to make the point about match only matching element or document nodes here. There's no free ride. 15:57:09 [Norm] Norm: The WG's intent is clear but the prose needs to be clearer. 15:57:46 [Norm] Norm: Should we make it an error? 15:57:55 [Norm] s/error/error to select a document node/ 15:57:58 [Norm] Henry: Yes. 15:58:03 [Norm] Accepted. 15:58:30 [MoZ] MoZ has joined #xproc 15:58:56 [Norm] Point 14: 15:59:02 [Norm] Norm: I think it can be a static error. 15:59:53 [Norm] Henry: Is this just a case where a processor could detect it statically if it wanted to? 16:00:33 [Norm] Norm: It could, but why not make it static? 16:00:57 [Norm] Henry: A select option to a step may be computed, so it has to be a dynamic error. 16:01:26 [Norm] Norm: But for select on our compound steps and for test on when, then they should be static errors. 16:02:00 [Norm] Norm: Any objections? 16:02:04 [Norm] Accepted. 16:02:09 [Norm] Topic: Any other business 16:02:16 [Norm] Adjourned. 16:02:30 [Zakim] -avernet 16:02:55 [Zakim] -PGrosso 16:03:09 [Zakim] -Andrew 16:04:55 [PGrosso] PGrosso has left #xproc 16:05:05 [Zakim] -Ht 16:05:09 [Zakim] -Norm 16:05:11 [Zakim] -MoZ 16:05:15 [Zakim] XML_PMWG()11:00AM has ended 16:05:17 [Zakim] Attendees were PGrosso, avernet, Ht, +95247aaaa, MoZ, Andrew, Norm 16:06:13 [Norm] RRSAgent, set logs world visible 16:06:13 [RRSAgent] I'm logging. I don't understand 'set logs world visible', Norm. Try /msg RRSAgent help 16:06:18 [Norm] RRSAgent, set logs world-visible 16:06:44 [Norm] RRSAgent, draft minutes 16:06:44 [RRSAgent] I have made the request to generate Norm 16:07:21 [Norm] RRSAgent, set logs world-visible 16:51:18 [MSM] MSM has joined #xproc 17:27:43 [Zakim] Zakim has left #xproc 17:28:41 [Norm] RRSAgent, bye 17:28:41 [RRSAgent] I see 2 open action items saved in : 17:28:41 [RRSAgent] ACTION: Henry will respond to the commenter on comment 24: passing PSVIs between steps. [1] 17:28:41 [RRSAgent] recorded in 17:28:41 [RRSAgent] ACTION: Norm to see if the commenter agree's we've addressed his concerns. [2] 17:28:41 [RRSAgent] recorded in
http://www.w3.org/2007/10/18-xproc-irc
CC-MAIN-2014-10
refinedweb
2,004
69.41
MBRLEN(3P) POSIX Programmer's Manual MBRLEN(3P) This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. mbrlen — get number of bytes in a character (restartable) #include <wchar.h> size_t mbrlenlen(). The behavior of this function is affected by the LC_CTYPE category of the current locale. The mbrlen() function need not be thread-safe if called with a NULL ps argument. The mbrlen() function shall not change the setting of errno if successful.)−2. In this case, [EILSEQ] shall be stored in errno and the conversion state is undefined. The mbrlen() function shall fail if: EILSEQ An invalid character sequence is detected. The mbrlen() function may fail if: EINVAL ps points to an object that contains an invalid conversion state. The following sections are informative. None. None. None. None. mbsinit(3p), mbrtowLEN(3P) Pages that refer to this page: wchar.h(0p), mbsinit(3p)
http://man7.org/linux/man-pages/man3/mbrlen.3p.html
CC-MAIN-2019-22
refinedweb
174
57.77
The last type of loop to look at in this tutorial is the for loop. This type of loop executes a set number of times and maintains its own counter. To define a for loop you need the following information: 1) A starting value to initialize the counter variable. 2) A condition for continuing the loop, involving the counter variable. 3) An operation to perform on the counter variable at the end of each loop cycle. For example, if you want a loop with a counter that increments from 1 to 10 in steps of one, then the starting value is 1; the condition is that the counter is less than or equal to 10; and the operation to perform at the end of each cycle is to add 1 to the counter. This information must be placed into the structure of a for loop as follows: Syntax for ( < initialization > ; < condition > ; < operation > ) { < code to loop > } Example using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { int i; for (i = 1; i <= 10; ++i) { Console.WriteLine("{0}", i); } } } }
https://www.loopandbreak.com/for-loop-in-c/
CC-MAIN-2021-25
refinedweb
189
61.46
import "github.com/golang/go/src/encoding/gob". A stream of gobs is self-describing. Each data item in the stream is preceded by a specification of its type, expressed in terms of a small set of predefined types. Pointers are not transmitted, but the things they point to are transmitted; that is, the values are flattened. Nil pointers are not permitted, as they have no value. Recursive types work fine, but recursive values (data with cycles) are problematic. This may change. To use gobs, create an Encoder and present it with a series of data items as values or addresses that can be dereferenced to values. The Encoder makes sure all type information is sent before it is needed. At the receive side, a Decoder retrieves values from the encoded stream and unpacks them into local variables. The source and destination values/types need not correspond exactly. For structs, fields (identified by name) that are in the source but absent from the receiving variable will be ignored. Fields that are in the receiving variable but missing from the transmitted type or value will be ignored in the destination. If a field with the same name is present in both, their types must be compatible. Both the receiver and transmitter will do all necessary indirection and dereferencing to convert between gobs and actual Go values. For instance, a gob type that is schematically, struct { A, B int } can be sent from or received into any of these Go types: struct { A, B int } // the same *struct { A, B int } // extra indirection of the struct struct { *A, **B int } // extra indirection of the fields struct { A, B int64 } // different concrete value type; see below It may also be received into any of these: struct { A, B int } // the same struct { B, A int } // ordering doesn't matter; matching is by name struct { A, B, C int } // extra field (C) ignored struct { B int } // missing field (A) ignored; data will be dropped struct { B, C int } // missing field (A) ignored; extra field (C) ignored. Attempting to receive into these types will draw a decode error: struct { A int; B uint } // change of signedness for B struct { A int; B float } // change of type for B struct { } // no field names in common struct { C, D int } // no field names in common Integers are transmitted two ways: arbitrary precision signed integers or arbitrary precision unsigned integers. There is no int8, int16 etc. discrimination in the gob format; there are only signed and unsigned integers. As described below, the transmitter sends the value in a variable-length encoding; the receiver accepts the value and stores it in the destination variable. Floating-point numbers are always sent using IEEE-754 64-bit precision (see below). Signed integers may be received into any signed integer variable: int, int16, etc.; unsigned integers may be received into any unsigned integer variable; and floating point values may be received into any floating point variable. However, the destination variable must be able to represent the value or the decode operation will fail. Structs, arrays and slices are also supported. Structs encode and decode only exported fields. Strings and arrays of bytes are supported with a special, efficient representation (see below). When a slice is decoded, if the existing slice has capacity the slice will be extended in place; if not, a new array is allocated. Regardless, the length of the resulting slice reports the number of elements decoded. level will fail. A struct field of chan or func type is treated exactly like an unexported field and is ignored. Gob can encode a value of any type implementing the GobEncoder or encoding.BinaryMarshaler interfaces by calling the corresponding method, in that order of preference. Gob can decode a value of any type implementing the GobDecoder or encoding.BinaryUnmarshaler interfaces by calling the corresponding method, again in that order of preference. This section documents the encoding, details that are not important for most users. Details are presented bottom-up. An unsigned integer is sent one of two ways. If it is less than 128, it is sent as a byte with that value. Otherwise it is sent as a minimal-length big-endian (high byte first) byte stream holding the value, preceded by one byte holding the byte count, negated. Thus 0 is transmitted as (00), 7 is transmitted as (07) and 256 is transmitted as (FE 01 00). A boolean is encoded within an unsigned integer: 0 for false, 1 for true. A signed integer, i, is encoded within an unsigned integer, u. Within u, bits 1 upward contain the value; bit 0 says whether they should be complemented upon receipt. The encode algorithm looks like this: var u uint if i < 0 { u = (^uint(i) << 1) | 1 // complement i, bit 0 is 1 } else { u = (uint(i) << 1) // do not complement i, bit 0 is 0 } encodeUnsigned(u) The low bit is therefore analogous to a sign bit, but making it the complement bit instead guarantees that the largest negative integer is not a special case. For example, -129=^128=(^256>>1) encodes as (FE 01 01). Floating-point numbers are always sent as a representation of a float64 value. That value is converted to a uint64 using math.Float64bits. The uint64 is then byte-reversed and sent as a regular unsigned integer. The byte-reversal means the exponent and high-precision part of the mantissa go first. Since the low bits are often zero, this can save encoding bytes. For instance, 17.0 is encoded in only three bytes (FE 31 40). Strings and slices of bytes are sent as an unsigned count followed by that many uninterpreted bytes of the value. All other slices and arrays are sent as an unsigned count followed by that many elements using the standard gob encoding for their type, recursively. Maps are sent as an unsigned count followed by that many key, element pairs. Empty but non-nil maps are sent, so if the), it is omitted from the transmission. The field number is defined by the type of the encoded struct: the first field of the encoded type is field 0, the second is field 1, etc. When encoding a value, the field numbers are delta encoded for efficiency and the fields are always sent in order of increasing field number; the deltas are therefore unsigned. The initialization for the delta encoding sets the field number to -1, so an unsigned integer field 0 with value 7 is transmitted as unsigned delta = 1, unsigned value = 7 or (01 07). Finally, after all the fields have been sent a terminating mark denotes the end of the struct. That mark is a delta=0 value, which has representation (00). Interface types are not checked for compatibility; all interface types are treated, for transmission, as members of a single "interface" type, analogous to int or []byte - in effect they're all treated as interface{}.. (A nil interface value is identified by the empty string and transmits no value.) Upon receipt, the decoder verifies that the unpacked concrete item satisfies the interface of the receiving variable.. The representation of types is described below. When a type is defined on a given connection between an Encoder and Decoder, it is assigned a signed integer type id. When Encoder.Encode(v) is called, it makes sure there is an id assigned for the type of v and all its elements and then it sends the pair (typeid, encoded-v) where typeid is the type id of the encoded type of v and encoded-v is the gob encoding of the value v. To define a type, the encoder chooses an unused, positive type id and sends the pair (-type id, encoded-type) where encoded-type is the gob encoding of a wireType description, constructed from these types: type wireType struct { ArrayT *ArrayType SliceT *SliceType StructT *StructType MapT *MapType GobEncoderT *gobEncoderType BinaryMarshalerT *gobEncoderType TextMarshalerT *gobEncoderType } type arrayType struct { CommonType Elem typeId Len int } type CommonType struct { Name string // the name of the struct type Id int // the id of the type, repeated so it's inside the type } type sliceType struct { CommonType Elem typeId } type structType struct { CommonType Field []*fieldType // the fields of the struct. } type fieldType struct { Name string // the name of the field. Id int // the type id of the field, which must be already defined } type mapType struct { CommonType Key typeId Elem typeId } type gobEncoderType struct { CommonType } If there are nested type ids, the types for all inner type ids must be defined before the top-level type id is used to describe an encoded-v. For simplicity in setup, the connection is defined to understand these types a priori, as well as the basic gob types int, uint, etc. Their ids are: bool 1 int 2 uint 3 float 4 []byte 5 string 6 complex 7 interface 8 // gap for reserved ids. WireType 16 ArrayType 17 CommonType 18 SliceType 19 StructType 20 FieldType 21 // 22 is slice of fieldType. MapType 23 Finally, each message created by a call to Encode is preceded by an encoded unsigned integer count of the number of bytes remaining in the message. After the initial type name, interface values are wrapped the same way; in effect, the interface value acts like a recursive invocation of Encode. In summary, a gob stream looks like (byteCount (-type id, encoding of a wireType)* (type id, encoding of a value))* where * signifies zero or more repetitions and the type id of a value must be predefined or be defined before the value in the stream.: This example shows the basic usage of the package: Create an encoder, transmit some values, receive them with a decoder. Code:) } This example transmits a value that implements the custom encoding and decoding methods. Code:) } This example shows how to encode an interface value. The key distinction from regular types is to register the concrete type that implements the interface. Code: } dec_helpers.go decode.go decoder.go doc.go enc_helpers.go encode.go encoder.go error.go type.go Register records a type, identified by a value for that type, under its internal type name. That name will identify the concrete type of a value sent or received as an interface variable. Only types that will be transferred as implementations of interface values need to be registered. Expecting to be used only during initialization, it panics if the mapping between types and names is not a bijection. RegisterName is like Register but uses the provided name rather than the type's default. CommonType holds elements of all types. It is a historical artifact, kept for binary compatibility and exported only for the benefit of the package's encoding of type descriptors. It is not intended for direct use by clients. A. NewDecoder returns a new decoder that reads from the io.Reader. If r does not also implement io.ByteReader, it will be wrapped in a bufio.Reader. Decode. DecodeValue reads the next value from the input stream. If v is the zero reflect.Value (v.Kind() == Invalid), DecodeValue discards the value. Otherwise, it stores the value into v. In that case, v must represent a non-nil pointer to data or be an assignable reflect.Value (v.CanSet()) If the input is at EOF, DecodeValue returns io.EOF and does not modify v. An Encoder manages the transmission of type and data information to the other side of a connection. It is safe for concurrent use by multiple goroutines. NewEncoder returns a new encoder that will transmit on the io.Writer.. type GobDecoder interface { // GobDecode overwrites the receiver, which must be a pointer, // with the value represented by the byte slice, which was written // by GobEncode, usually for the same concrete type. GobDecode([]byte) error } GobDecoder is the interface describing data that provides its own routine for decoding transmitted values sent by a GobEncoder. type GobEncoder interface { // GobEncode returns a byte slice representing the encoding of the // receiver for transmission to a GobDecoder, usually of the same // concrete type. GobEncode() ([]byte, error) } GobEncoder is the interface describing data that provides its own representation for encoding values for transmission to a GobDecoder. A type that implements GobEncoder and GobDecoder has complete control over the representation of its data and may therefore contain things such as private fields, channels, and functions, which are not usually transmissible in gob streams. Note: Since gobs can be stored permanently, it is good design to guarantee the encoding used by a GobEncoder is stable as the software evolves. For instance, it might make sense for GobEncode to include a version number in the encoding. Package gob imports 14 packages (graph). Updated 2019-09-24. Refresh now. Tools for package owners.
https://godoc.org/github.com/golang/go/src/encoding/gob
CC-MAIN-2019-51
refinedweb
2,127
60.95
Industry is rapidly moving to adopt Hadoop 2.x. With every upgrade process — especially one that is so big in nature — there is a level of complexity involved. Qubole has already started offering a beta service to our customers. Our customers have started to try out Hadoop 2 as well, and as with any transition, this one is not without its own quirks as well. One of our customers, who run their major workloads on Qubole, wanted to try out Hadoop 2, and for this experiment they wanted to transfer a part of their data that is stored in HDFS clusters managed by Qubole (which is HDFS 1) to Hadoop 2. The Hadoop community has done tremendous work in simplifying the migration from hadoop 1 clusters to hadoop2 clusters but there is no support to make sure that the clusters can interact with each other. This becomes very important during the transition process when the two systems are running in parallel and splitting workloads. We had to figure out a scalable way to migrate data from HDFS 1 to HDFS 2 reliably. HFTP with DistCp DistCp is a popular tool in Hadoop to transfer data from one location to another. It uses MapReduce to parallelize the file copy operations and is ideal for any Hadoop compatible file system. One can use a combination of HDFS to HDFS or HDFS to S3 etc. all of which have compatible file system implementations in Hadoop. To transfer data between HDFS 1 and HDFS 2, an HTTP based file system called HFTP which reads data using HTTP. Note that HFTP can only perform read operations. Writes are not supported. In this particular use case, data is generated on HDFS 1 clusters, and has to be immediately pushed to HDFS 2 clusters. The other issue is that this would require a map reduce application running on HDFS 2, but the cluster was already running different applications due to which starting a mapreduce cluster would require more resources and maintenance. Solution To solve this, we have to create a HDFS 2 compatible file system in Hadoop 1. To see how we did it, lets discuss how file system is implemented in Hadoop. There is a base class called FileSystem, and all Hadoop compatible file systems override this class. Once overridden, one can point a specific URI to a specific class in the Hadoop configuration. For example: <property> <name>fs.hdfs.impl</name> <value>org.apache.hadoop.fs.hdfs.DistributedFileSystem</value> </property> The above code will point all URIs of the form hdfs://… to use the DistributedFileSystem class. Thus the solution is to add the Hadoop 2 jars in Hadoop 1 classpath, and then point another URI (example hdfs2) to the corresponding Hadoop 2 file system class. It seems straight forward but there are two problems: - The fully classified class names are conflicting in hadoop1 and hadoop2. There are many classes (eg org.apache.hadoop.fs.hdfs.DistributedFileSystem) which have name package paths, but have different implementations. - Even if the class names were separated, the file system implementations for hdfs 1 and hdfs 2 are entirely different which requires multiple dependencies in the respective hadoop systems. We solved these using maven shading. Using shading, we renamed the Hadoop 2 packages from org.apache.hadoop.fs.hdfs.* to qubole.org.apache.hadoop.fs.hdfs.* into an uber jar which allowed us to place Hadoop 2 classes in Hadoop 1 classpath. The only thing remaining now was to somehow call Hadoop 2 DFS implementation using Hadoop 1 FileSystem class. For that, we created a new class overriding Hadoop 1 FileSystem and called functions of Hadoop 2 DistributedFileSystem inside that. A small code snippet is mentioned below: // Inside Hadoop1 public class Hdfs2FileSystem extends FileSystem { // HDFS1 FileSystem qubole.org.apache.hadoop.hdfs.DistributedFileSystem dfs; public void write(byte[] data) { dfs.write(data); } public byte[] read(int offset) { return dfs.read(offset); } // And so on. } The final thing we did was to add another configuration parameter: <property> <name>fs.hdfs2.impl</name> <value>org.apache.hadoop.fs.Hdfs2FileSystem</value> </property> Using this, we were easily able to transfer files from HDFS 1 to HDFS 2 using the following command: distcp hdfs://hdfs1_namenode_address/<src> hdfs2://hdfs2_namenode_address/<dest> The power of maven shading and the simple way to override the FileSystem APIs resulted in a reliable solution for interaction between hdfs 1 and hdfs 2 at scale, and has been extremely useful in the transition from hadoop 1 to hadoop 2.
https://www.qubole.com/blog/bridging-hdfs2-with-hdfs1/
CC-MAIN-2019-18
refinedweb
747
53.41
Wrapping command-line programs, part III In the second article in this series I showed how to use OpenEye's mol2nam program as coprocess from Python. To make it work I had to edit the original source code to add an fflush after writing the name. Otherwise the output was buffered and inaccessible to the Python wrapper. Sadly, programs in this field rarely come with recompilable source code. What could be done if I couldn't add the fflush? When the C stdio library initializes stdout it checks if the output is a terminal. If so it sets the output mode to line buffered. Otherwise it is put into block mode. A console window is a terminal but files and pipes are not. Not many people use actual terminals these days ("tty" is short for "teletypewriter"). Instead they are emulated using what are called pseudo-ttys. We can create and use our own pty to communicate with the original OpenEye mol2nam in line buffered mode. This gets into an aspect of Unix that I don't know well. There's a 30+ year history of terminal control that I've never had to worry and was never interested in learning. I have only vague ideas of what ioctl, fcntl and tcgetattr/tcsetattr do. What I'm about to describe works, but there may be ways to make it work better. Please let me know if there's a better way. Instead what I do is let someone else provide a higher-level interface to the terminal control functions. Pexpect is a Python library influenced by Don Libes' venerable Expect package. It opens a pty connected to a process, sets the terminal modes correctly, provides a Python file-like interface, and a few bits of extra functionality. For more details you should read the documentation and scan the source code. Using pexpect is quite simple, except when aspects of the archaic, baroque pty interface appear. The main interface is the spawn class, which takes the command to run and an optional timeout. The newly created instance implements file-like method, so you can still read, readline and write to the interface. Here's the code to connect to mol2nam and skip the three lines of the header.One difference between this and the subprocess interface is that everything is communicated over a single bidirectional connection. There is no difference between stdin, stdout and stderr. That's why the previous code snipped used only readline() and didn't specify which input to read from. By default terminals echo the input so something written to the spawn instance will also be read. When I write the SMILES line to the process I need to skip the echoed response, like this mol2nam = self._get_mol2nam() mol2nam.write(smiles + "\n") mol2nam.readline() # skip echoed line line = mol2nam.readline()The spawn class has a setecho method which I hoped would prevent the echoing. When I toggled it I no longer got output from mol2nam. I don't know why. I wrote a simple program that implements enough of the mol2nam protocol to pass the self test but was not able to make it reproduce the problem. Did I mention I don't like working with ptys? The readline method gets both the systematic name sent to stdout and the error messages sent to stderr. Luckily they are easy to distinguish because "Warning:" is not in any systematic name. As before, if there is an error I want to restart the connection. I can't simply close the input stream then read from the output stream because they are the same connection. If I close one then I close the other. Instead I need to use a pexpect method called sendeof which tells mol2nam that there is no more input. The relevant code is:()Bear in mind that lines coming from a pty end with "\r\n" and not just the "\n" character. This is an aspect of using an API designed to support typewriter printer carriages. If the last line had been return line[:-1] then it would have the extra "\r" character. The rstrip() method removes all whitespace on the right so works just fine. With these in place I ran the regression test code. It didn't pass. The problem was with the very long SMILES string meant to force mol2nam to give a segmentation fault. I couldn't figure out what was going on so I finally wrote a new program that implements the mol2nam API and should be able to pass the regression test. This let me watch what was going on with that side of the connection. Because the stdout and stderr went back to the wrapper code you can see I opened up /dev/ttyp4 and wrote output there. I had another terminal window open and from the tty command I knew it was using that pseudo-tty handle. (In unix nearly all I/O is a file.) By writing to that file the output goes to that terminal's display. An advantage is that try/except block around the call to main(). That let me see the exceptions during testing which otherwise would have been put somewhere in the pexpect interface. Here's my test code. I open the debug file in unbuffered mode used the "-u" option on the #! line to tell Python to use unbuffered stdin and stdout. The function _W constructs an appropriate "Warning:" message that will pass the smi2name test code. #!/usr/bin/python -u import sys debugf = open("/dev/ttyp4", "w", 0) debugf.write("Starting\n") def _W(s, i): lines = ["Warning:", "Warning: " + s] if i: lines.append("Warning:" + (" "*i) + "^\n") return "\n".join(lines) + "\n" answers = { "C": "methane", "U": _W("", 1), "CC1": _W("Unclosed ring.", 3), "LONG": _W("Unclosed branch.", None), "CCCC(C": _W("Unclosed branch.", 6), "CCCCCC)C": _W("", 7), "[U]": "uranium", "C1CC23CC4CC3C1C(C2)CC4": "BLAH", "C#N": "hydrogen cyanide", } def main(): print "header line 1" print "header line 2" print "header line 3" while 1: line = sys.stdin.readline() print >>debugf, "Got line", repr(line) if not line: break if len(line) > 80: line = "LONG" print answers[line.rstrip()] try: main() except: import traceback traceback.print_exc(file=debugf) When I used this I found that the long SMILES was never getting to the coprocess, though the short SMILES strings were. Through experimentation I found that if the SMILES was 1000 characters or smaller then it would be sent to the coprocess but anything longer caused problems. When I tried to write a long SMILES I found that I got several chr(7) bytes if I read from the spawn interface. ASCII character 7 is for BEL, which should ring the terminal bell. This strongly suggests the terminal is buffering. Terminals have two major modes; cooked and raw. When you type something on the command-line you expect to be able two^H^Ho edit the line before pressing enter. Various characters get treated as editing characters, like backspace (which is often either ASCII 8/^H) or ASCII 127/^?) and "kill line" (ASCII 21/^U). You also expect control-C (ASCII 3) to kill a process and control-Z (ASCII 26) to suspend it. When the terminal supports these conversions it is in cooked mode because it is processing the input. Otherwise it is in raw mode. Programs like vi and emacs use raw mode to capture each character as its pressed and to change the meaning of things like control-C. To test if this was the case I used the test string "CC"+chr(21)+"S". In cooked mode the special character in the middle kills the line; it erases everything before it on the input line. If the terminal is in cooked mode then the result should be "hydrogen sulfide". And indeed it is. I also tried using chr(8) for backspace but had to switch to chr(127) which is what the terminal actually uses. The "stty -a" command lists all of the special characters. cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>; eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V; min = 1; quit = ^\; reprint = ^R; start = ^Q; status = <undef>; stop = ^S; susp = ^Z; time = 0; werase = ^W;The backspace worked as did chr(3) for control-C and chr(26) for control-Z. The problem is we're in cooked mode so the terminal saves a 1000 byte buffer to allow for editing. I want to switch into raw mode but I don't know how. When I try "import tty" then "tty.setraw(mol2nam.child_fd)" then the interface just hangs. Like I said, I don't know the details of ptys well enough. Luckily for me I don't need know them. For this interface it's okay to limit the SMILES string to no more than 1,000 characters and to prohibit anything other than the printable ASCII characters. I mentioned in the first essay of this current series that I don't like checking for incorrect data at this level of the API. The exception is for cases like this where bad input can cascade and have big or unexpected problems. Here's the code for checking for these cases. # Check for characters other than printable ASCII _unexpected_char_pat = re.compile(r"[^\040-\0176]") ... def smi2name(self, smiles): """convert a SMILES string into an IUPAC name""" if smiles == "": return "vacuum" elif "\n" in smiles: raise NamingError("Newline not allowed in SMILES") elif len(smiles) > 1000: raise NamingError("SMILES too long") m = _unexpected_char_pat.search(smiles) if m: raise NamingError("Unexpected character at position %d" % (m.start(0)+1,)) ... ...and the test cases for them. Note that the error message for the long SMILES string has changed. def test(): for smi, name, errmsg in ( ("C", "methane", None), ("C"+chr(127)+"S", None, "Unexpected character at position 2"), ("CC"+chr(3), None, "Unexpected character at position 3"), ("S", "hydrogen sulfide", None), ... ("C"*32764 + "(C)", None, "SMILES too long"), ("C"*1000 , "kiliane", None), ("C"*1001 , None, "SMILES too long"),Hmm, "\n" is also a special character so I can remove the special test for ""Newline not allowed in SMILES". Here's the version of the wrapper code that use pseudo-ttys through the pexpect library import os, re import pexpect MOL2NAM = os.path.join(os.environ["OE_DIR"], "bin", ): errmsg = "Cannot parse SMILES" if "\nWarning: Unclosed branch." in text: errmsg = "Unclosed branch" elif "\nWarning: Unclosed ring." in text: errmsg = "Unclosed ring" m = _error_pos_pat.search(text) if m: errpos = len(m.group(1)) + 1 errmsg = errmsg + " at position %d" % errpos return errmsg class Smi2Name: def __init__(self, executable = None): # a pexpect.spawn connected to mol2nam self._mol2nam = None if executable is None: executable = MOL2NAM self.executable = executable def smi2name(self, smiles): """convert a SMILES string into an IUPAC name""" if smiles == "": return "vacuum" elif len(smiles) > 1000: raise NamingError("SMILES too long") m = _unexpected_char_pat.search(smiles) if m: raise NamingError("Unexpected character at position %d" % (m.start(0)+1,)) mol2nam = self._get_mol2nam() mol2nam.write(smiles + "\n") mol2nam.readline() # skip echoed line line = mol2nam.readline()() # Defer instantiation of the wrapper until it's needed. # This lets other code change MOL2NAM if needed, but changes # will only work if done before calling this function. _smi2name = None def smi2name(smiles): """convert a SMILES string into an IUPAC name""" global _smi2name if _smi2name is None: _smi2name = Smi2Name().smi2name return _smi2name(smiles) def test(): for smi, name, errmsg in ( ("C", "methane", None), ("C"+chr(127)+"S", None, "Unexpected character at position 2"), ("CC"+chr(3), None, "Unexpected character at position 3"), ("S", "hydrogen sulfide", None), ("U", None, "Cannot parse SMILES at position 1"), ("CC1", None, "Unclosed ring at position 3"), ("C", "methane", None), ("C"*32764 + "(C)", None, "SMILES too long"), ("C"*1000 , "kiliane", None), ("C"*1001 , None, "SMILES too long"), ("C\nC", None, "Unexpected character at position 2"), ("CCCC(C", None, "Unclosed branch at position 6"), ("CCCCCC)C", None, "Cannot parse SMILES at position 7"), ("[U]", "uranium", None), ("", "vacuum", None), ("C1CC23CC4CC3C1C(C2)CC4", None, "Unsupported structure"), ("C#N", "hydrogen cyanide", None)): computed_name = computed_errmsg = None try: computed_name = smi2name(smi) except NamingError, err: computed_errmsg = str(err) if (name != computed_name or errmsg != computed_errmsg): raise AssertionError("SMILES: %r expected (%r %r) got (%r %r)" % (smi, name, errmsg, computed_name, computed_errmsg)) print "All tests passed." if __name__ == "__main__": test()and here are the timing numbers I got Total time: 79.91 Time per compound: 0.01 59.450u 12.400s 1:20.11 89.6% 0+0k 0+1io 0pf+0wAt 80 seconds there is definitely a performance hit using a pty, probably from terminal cooking through I didn't try to track it down. Compare that to the subprocess pipe code which runs in 25 seconds and the command-line interface at 15 seconds. Still it's better than the first version which restarted mol2nam for every call and took nearly 370 seconds. Unlike the subprocess version it uses the mol2nam provided by OEChem (no recompile needed) and unlike mol2nam by itself it reports exactly which structures could not be parsed. Andrew Dalke is an independent consultant focusing on software development for computational chemistry and biology. Need contract programming, help, or training? Contact me
http://www.dalkescientific.com/writings/diary/archive/2005/04/17/wrapping_command_line_programs_III.html
CC-MAIN-2015-40
refinedweb
2,194
64.91
I have some process that computes things, and every now and then, I save a graph on the disk. It’s useful to keep the file opened to look at the early results as the process is running. On Ubuntu, the image is simply updated whenever a new version of the graph is saved. However on Windows, the call fig.savefig(<filename>) returns an error if there already exists a file of the same name and that the file is opened (if the file is not opened then it just overwrites the already existing one without issue). The error is: OSError: [Errno 22] Invalid argument Here is a minimal example: to get the error, run this and open the file myplot.png import matplotlib.pyplot as plt import math import time x = [] y = [] for t in range(100): print(f"t={t}") time.sleep(1) x.append(t) y.append(math.sin(t)) if t % 5 == 0: fig, ax = plt.subplots(1, 1) ax.plot(x, y) fig.savefig("myplot.png") I guess it would have to do with the image-viewer on Windows locking the file? Anyone knows a fix? Source: Windows Questions
https://windowsquestions.com/2021/09/29/on-windows-fig-savefig-cant-overwrite-existing-file-when-it-is-already-open-oserror-errno-22-invalid-argument/
CC-MAIN-2022-05
refinedweb
194
76.11
This tutorial shows you how to get started with TensorFlow, a machine learning platform for Python. You’ll learn how to set up a development environment, install TensorFlow, and create a simple machine learning model. Introduction to TensorFlow and machine learning Machine learning is a subset of artificial intelligence (AI) that enables computers to learn from data rather than being explicitly programmed. TensorFlow is an open-source software library for machine learning that was developed by Google. The term “tensor” in TensorFlow refers to the data that flows through the graph. A graph in TensorFlow is a series of computations that take place in order to achieve a specific goal, such as training a machine learning model. Setting up your TensorFlow environment This tutorial will show you how to set up a machine learning environment on your local machine using TensorFlow. TensorFlow is an open-source software library for Machine Intelligence created by researchers at Google Brain. It is used by major companies all over the world, including Airbnb, HSBC, Intel, and Samsung. In this tutorial, we will cover the following topics: – What is TensorFlow? – Setting up your TensorFlow environment – Creating your first TensorFlow model – Training and evaluating your model – Deploying your model – Resources for further learning Basics of TensorFlow programming TensorFlow is a powerful tool for doing machine learning and other complex computations, but it can be difficult to get started. In this tutorial, I’ll walk you through the basics of programming in TensorFlow so that you can get started building your own machine learning models. TensorFlow is a programming language designed for doing machine learning and other complex computations. It was created by Google and released as an open-source project in 2015. TensorFlow is based on the idea of using a graph to represent computations, with the edges representing the flow of data between nodes. This makes it very efficient for doing large-scale machine learning, because the graph can be parallelized across multiple cores or GPUs. To use TensorFlow, you first need to install it on your computer. You can do this using pip, which is a tool for installing Python packages: pip install tensorflow Once TensorFlow is installed, you can import it into your Python programs using the following code: import tensorflow as tf This gives you access to all of the TensorFlow functions and classes. Building your first TensorFlow model In this TensorFlow tutorial, we’re going to cover how to build a simple model to predict the lifespan of a cricket based on three environmental variables. We’ll use the following steps to build our model: 1. Collect training data 2. Set up our model’s architecture 3. Train our model 4. Evaluate our model’s performance 5. Make predictions with our trained model Advanced TensorFlow concepts This TensorFlow tutorial will cover advanced concepts such as creating custom Estimators and using TensorFlow with containers. Tips and tricks for working with TensorFlow TensorFlow is an open-source software library for data analysis and machine learning. Released in 2015 by Google, TensorFlow has seen widespread adoption and is now the platform of choice for many professional researchers and developers working in the field of machine learning. Despite its popularity, TensorFlow can be challenging to work with, especially for beginners. In this tutorial, we’ll provide some tips and tricks for working with TensorFlow that will make your life easier. We’ll cover topics such as: – installing TensorFlow – creating and using Tensors – working with DataFrames – visualizing data with TensorBoard – and more! Using TensorFlow for real-world applications While academic research drives the theory and direction of machine learning, it is industrial and commercial applications that provide the impetus for its widespread adoption. This was true of neural networks in the 1980s and it is true of deep learning today. Not that there aren’t important challenges to be addressed in basic research—there are—but, rather, that if there is no compelling use case for a technology, it will not find its way into the marketplace. Fortunately, there are many interesting ways to deploy machine learning, including using TensorFlow for real-world applications. In this tutorial, we will explore some of these applications and show you how to get started using TensorFlow. troubleshooting and debugging TensorFlow programs If you are new to TensorFlow, it is best to start with some of the lower-level API functions so that you can better understand how the system works. However, even experienced TensorFlow programmers can find it difficult to debug their programs. This is especially true if you are working with large and complex models. In this section, we will discuss some common problems that you may encounter and offer some suggestions on how to solve them. One common problem is that your TensorFlow program may not be converging. This can be due to a number of factors, such as a poor choice of optimization algorithm, insufficient training data, or incorrect model architecture. If you are not using a pre-trained model, it is also important to make sure that your initialization is good. Another common issue is that your program may be slow. This can be caused by inefficient code, incorrect use of the TensorFlow API, or out-of-date hardware. If you are having trouble debugging your TensorFlow program, here are a few tips: – Use the TensorFlow debugger (tfdbg): This is a new tool that can help you inspect your TensorFlow runtimes and identify issues such as slow performance or numerical instability. – Use logging: The TensorFlow logging system can be very helpful in identifying what parts of your code are taking the most time. You can use the tf.logging.* functions to record information about your program at runtime. – Use profiling: The tf.contrib.profile module provides APIs for collecting performance data about your TensorFlow program. This data can be very helpful in identifying bottlenecks in your code. TensorFlow resources and further reading If you’re just getting started with TensorFlow, we recommend checking out the following resources: – The TensorFlow website: – The TensorFlow get started guide: – The TensorFlow tutorials: For more advanced users, we recommend the following resources: – The TensorFlow API reference documentation: – The TensorFlow code repository on GitHub: Conclusion In conclusion, we have seen how to build a simple machine learning model using the TensorFlow library. We have also seen how to train this model on a dataset and use it to make predictions. This tutorial should have given you a good understanding of the basics of TensorFlow and how to use it for machine learning.
https://reason.town/machine-learning-tensorflow-tutorial/
CC-MAIN-2022-40
refinedweb
1,091
51.48
C++ and Beyond 2012: Scott Meyers - Universal References in C++11 - Posted: Oct 09, 2012 at 6:00 AM - 92,388 Views - 46 Comments Something went wrong getting user information from Channel 9 Something went wrong getting user information from MSDN Something went wrong getting the Visual Studio Achievements Right click “Save as…” Scott Meyers presents "Universal References in C++11". This was filmed at C++ and Beyond 2012. This is the full session in all of its splendor. Huge thanks to Scott for allowing C9 to provide this excellent C++11 content to the world. From Scott's recently published article in the October 2012 edition of ACCU's Overload:. Tune in. Scott's an incredible presenter and it's well worth your time to both read his article and watch his presentation on the subject. Great, t is now t'rex but it can be godzilla too... I think the terminology "Universal Reference" should be standardized... Wow. I wish they add something like T# as an alias to DeducedT&& (aka universal ref). That's incredibely error prone!!!! Hopefully only template library authors will get bald from this. Looking forward immensely to watching this, enjoyed Scotts ACCU article.is there any chance Charles of publishing the titles and approximate 'airtimes' for subsequent C++ & Beyond 2012 talks? I'm keen to watch Andre's on performant code. @dot_tom: There are no specific dates to share. As I said in a previous post ( the last time you asked the same question ), it's more about timeframes (I have been instructed to release one post per month). Anyway... Here's what's coming in terms of specific pieces of content in no specific order...: Herb's sessions on C++11 Memory Model (two of them). Panel "Convincing your Boss (to use C++11 in production)". Herb's session on concurrency and parallelism. Andrei's session on Systematic Error Handling. Herb's session on the renewed meaning of __ and __ (you already know what one of the blanks is given the panel released last month....). C The perfect topic for Scott to cover. And they say C++ is complex..! Looking forward to your book Scott. Thanks for publishing, can't wait for more :) Right after watching this I had to dive into some corners of my codebase and fix / improve stuff, thanks to this presentation. Please use internet standard video i.e. flash or html5. I'm not going to install Silverlight for a single video. Mike, click "format" under the video and select HTML5. @Mike: Please actually look around the room before declaring the need for more chairs.... Here, I'll help you. See the video player above? Great. Now, look to the right. See the links under Download? You can figure it out from here... Or, just click the format button to choose the format to play in the inline player. C This video and the related ACCU article are both great and really have given me a lot to think of on this topic, it has clarified a number of issues and questions I've had working with && in my own code. Thanks Scott for giving the talk and allowing it to be shown on Channel 9 for those of us who couldn't be at the even and thanks to Charles for capturing it and getting it posted for us! Thanks for sharing this video on C9. Scott seems clearly a master in C++: he knows the subject inside out. But to me these C++11 rvalue things seem very complicated.. It may be worth noting that some C++ programmers already find C++98 non-const lvalue reference confusing with their value syntax but pointer semantics (and in fact they prefer raw pointers T* to T&, like explained in Google's C++ Style Guide), but now with the addition of rvalues things got much more complicated. C++ code can already be complex, and it may be kind of scary to debug some complex C++ code with the addition of these universal references, rvalues and related rules (I sometimes think that these help [time 00:37:29] "Keeps people like me [Scott] in business" I'd like to watch some video or read some doc showing how rvalues can be used by the "practical" (non-language-lawyer) C++ programmer, or maybe rvalues are too much complex and remain in the domain of the language lawyers? @C64: >Scott seems clearly a master in C++: he knows the subject inside out. >But to me this C++11 rvalue things seem very complicated. I must admit, every time I think I've got it I read something else which makes me feel like I've got to go back to square one! So far, I find that Scott's explanation is the clearest. . Yes, no-one said rvalue references were easy. >It may be worth noting that some C++ programmers already find >C++98 non-const lvalue reference confusing with their value syntax There is no excuse for that. They should try harder. >but pointer semantics (and in fact they prefer raw pointers T* to T&, >like explained in Google's C++ Style Guide) Please don't quote the Google C++ Style Guide - it is totally boneheaded. >but now with the addition of rvalues things got much more complicated. Yes, things are definitely more complex! >C++ code can already be complex, and it may be kind of scary to >debug some complex C++ code with the addition of these >universal references, rvalues and related rules >(I sometimes think that these help [time 00:37:29] >"Keeps people like me [Scott] in business" Rvalue references do give you pause, I agree. I'm confident that things will get smoothed out over time however. >I'd like to watch some video or read some doc showing how rvalues >can be used by the "practical" (non-language-lawyer) C++ programmer, >or maybe rvalues are too much complex and remain in the >domain of the language lawyers? Hey, we are all language lawyers now! ;-) It is interesting to ponder where all of this will lead. As you point out, a lot of people thought C++98 was too complex already (but then the world is filled with lazy people :-D) In general, I view the attitude that every programming language "should be as easy as Visual Basic" as so much noise. It is clear that the future will likely only hold more complexity not less, and I find it ironic how much of modern computing is still stuck in the 1970s. There will always be a tension between appeasing beginners vs experts. The traditional line trotted out with the more complex C++ features is along the lines of "oh, don't worry about that, that's for library writers". There is resistance to that line these days - there seems to be a strong sense that C++ mustn't put beginners off by being over complex. This may well have it's roots in Bjarne's observation that C++ "had become too expert friendly" when he took up his lecturing position. We were all beginners once upon a time, so we should all therefore have sympathy with this view - however, this doesn't mean the future can be put on hold. Conversely, I would argue that it is advanced features and uncompromising performance that ultimately keeps C++ relevant. @Ben Hanson: While I don't agree with every point of that style guide, I respect that guide and I think great software was built using it (starting from the search engine used by billions of people worldwide).. I hope so I agree with you that we can't have super-easy programming languages; but I think that crossing some high-complexity line is not good. Keep in mind that code must be maintained, so its clarity is a very important point. For example, remaining in the domain of C++98/03, I think template metaprogramming kind of crossed an high-complexity line: writing, and especially maintaining and debugging template metaprogramming code is an highly complex (and error-prone) job, It is important to pay attention to not cross a "read-only/write-only complexity level". I think C++ gives very good performance and also offers good abstractions. But when, after benchmarking, we identify some hot spots and want to make the performance better, we can always use low-level pure C or even manually crafted assembly code in those particular cases. Scott lost me almost as quickly as MSDN magazine does every time I try to read it. I have never seen && in c++ code. I am not completely certain about rvalue and lvalue. The r and l stand for right and left side of the expression, right? Other than that I am lost. Still do not understand why C++, with all the additions being throw at it, is retaining the #include header file thing. That is the aspect of the language that to me makes it difficult to use and structure your code files. >While I don't agree with every point of that style guide, >I respect that guide and I think great software was built >using it (starting from the search engine used by billions >of people worldwide). I see this as a weak argument. After all, good products have been written in pure assembly. That doesn't mean we should all do it. . This could easily degenerate into a rant about how not everyone uses vi and how it's the *21st century*, but seriously let's not even go there. I *do* see your point though. >I agree with you that we can't have super-easy >programming languages; but I think that crossing some >high-complexity line is not good. I agree that there surely is a threshold. Quite where it lies is debatable. >Keep in mind that code must be maintained, so its clarity >is a very important point. Sure, we've all maintained crappy code I'm sure. >For example, remaining in the domain of C++98/03, >I think template metaprogramming kind of crossed an >high-complexity line: writing, and especially >maintaining and debugging template metaprogramming >code is an highly complex (and error-prone) job I've not had the 'pleasure' of maintaining such code. I *have* however experienced maintaining 'pointer hell - C with classes' style code. In all fairness, I wouldn't want to maintain boost style MPL libraries! Any code can be hard to maintain if it's done badly. It's true that the more complex it is, the harder it can be, but that is surely true regardless of the techniques used. Obviously calculations done at compile time are that bit harder to debug, but then I've tried to debug opaque interfaces using QT and COM before and that can be just as unfathomable. >I think C++ gives very good performance and >also offers good abstractions. >But when, after benchmarking, we identify some hot spots >and want to make the performance better, we can always >use low-level pure C or even manually crafted assembly >code in those particular cases. And this is the best we can do in the 21st century? Your approach to C++ feels more and more like "C with classes". If that is the case, then we will simply have to agree to disagree. That talk really clarified a lot of things to me - thank you, Scott! I just wished the camera man hadn't swung the camera that much; but that was just a minor annoyance. @Ben Hanson: Yes, probably this is the key point I consider programming languages just as tools to solve problems, and if the assembly language is the best tool for the job in a particular case (e.g. top performance in some hot spot), I have nothing against it (and if in other cases the problem can be solved with Python, I can use it, etc.) As for my approach for C++, I like that we don't have to be language-lawyers to be productive with this language: we can choose a "safe" subset of it and use it. I don't know what you exactly mean with "C++ as C with classes"; I like using C++ classes, including the power of destructors, smart pointers (e.g. CComPtr is convenient to do COM programming), template for containers but not for template metaprogramming, etc. As I wrote in the beginning, there are convenient constructs in C++11, like auto, lambdas, etc. that can simplify the code. I think it's possible to use C++11 enjoying these convenient features and ignoring the more "obscure" parts. @C64: Did you watch the presentation? rvalue references - and universal references - are not "obscure" features of the language.... You keep referring to features like auto or lambdas, but there's so much more to the language (like it has a memory model now....) and rvalue is a big deal. As Scott commented on the reddit/cpp thread -> "Rvalue references are a feature every professional C++ programmer is going to have to master. Even if you never use one in your code (highly unlikely, unless you don't write classes or don't care about efficiency), you'll still need to understand them in order to understand others' code. They are not a niche feature." C The quality of information in this talk was almost at STL's level. Impressive. SteveRichter> I have never seen && in c++ code. That's because it's new to C++11. > I am not completely certain about rvalue and lvalue. The r and l stand for right and left side of the expression, right? The lvalue/rvalue thing is indeed confusing at first (beginning with the fact that their names are terrible). was my original attempt at explaining this clearly, but WARNING - I wrote that a long time ago. While the stuff about what lvalues/rvalues are is still accurate, the rvalue references specification was changed several times before C++11 was finalized. So, the "rvalue references v1" described by my blog post are different from the v2 that was shipped in VC10, and the v3 that's specified by C++11. > Still do not understand why C++, with all the additions being throw at it, is retaining the #include header file thing. That's being worked on ("modules") for a future C++ Standard. I am really not a complainer, but I wish MSFT would just forge ahead and do modules on their own. From my recent experience with C++ it is very difficult to organize your code when you have to split things between .h and .cpp files. Esp when I bring my C# practices to a C++ project. Filters are not a substitute for C# folders. thanks for the explanations. Great speech! Now everything seems simple and logical The talk is great, and the concept of universal reference is a great one to understand better how things behave. It's kind of sad that the C++ committee solved perfect forwarding only when templates are used. If I have a class, say "Person", that I want to construct with "firstName", "middleName", "lastName" and "address", all as strings, I have to either write 16 constructors (since each variable could be either taken as an rvalue ref or a const lvalue ref) or a template function that takes any 4 arguments (all universal references as Scott Meyers would put it) but happens to compile only for strings (or at least only for things strings can be constructed from)... The committee probably looked at their own problems (make_shared, tuple, pair are all heavily templated) and changed the language to solve just that, it's kind of sad that I can't get perfect forwarding and universal references for my non templated code... After all, there is probably a lot more non template classes than there are template classes out there... @Charles:Yes, I watched the presentation. If such a complex feature is going to be pervasive and is a must to be learned for C++ devs, I'm not sure that would be good for the language and its adoption. It used to be that in C++ we could pick features gradually, and be productive with the language without knowing every intricacy. @C64: >If such a complex feature is going to be pervasive and >is a must to be learned for C++ devs, I'm not sure that >would be good for the language and its adoption. I await the reaction of industry with interest. It is true that anyone who thought that C++ was too complex already will hardly be thrilled with these new developments. But then that kind of person was probably unsuited to Modern C++ in the first place. There are solid reasons for introducing rvalue references into a 'by value' based language like C++. The complexity arises naturally out of those reasons. In the end, it's an entire way of thinking. If 'by reference' and garbage collection appeal more, there are other languages for that. >It used to be that in C++ we could pick features gradually, >and be productive with the language without knowing every intricacy. Do you find that policemen are getting younger too? ;-) Vincent> but happens to compile only for strings (or at least only for things strings can be constructed from)... Like const char *. In this case, perfect forwarding would be more efficient than (const string&), (string&&) overloads without a dedicated (const char *) overload. > it's kind of sad that I can't get perfect forwarding and universal references for my non templated code By definition, perfect forwarding takes arbitrary types and passes them along. Templates are how C++ works with arbitrary types. Really? With C++, you're given very sharp knives to cut through computational complexity. As such, you need to learn how to use them wisely (and effectively). When has this not been the case for C++? Further, C++ is language that affords great efficiency and performance across the hardware spectrum and with high level modern programming abstractions. It doesn't provide these capabilities for free. Sometimes you need to really understand the tools you use to engineer software systems. C++11 certainly makes some things easier (so, the constructs you've mentioned already like auto, lambdas, and also automatic memory management via smart pointers, etc...). Scott did a great job in this presentation making things very clear. C @Vincent: You have another option actually. Write one constructor, taking all of your parameters by value, then move them where they need to be. This may result in additional moves, but never additional copies (unless there is no move constructor, which is not the case for strings). For ppl confused with && STL has a great lecture series here on C9, in one of episodes he talks about use of RVR. Im kind of busy so I dont have time to pinpoint video now, but if you are interested I hope I provided you some help. Sure. Just an example: I use STL classes like std::vector and std::map, and I almost completely ignore template metaprogramming (things like std::enable_if, etc.). I don't know if inside STL implementation it is used, but as an STL client, I can just ignore it. You can ignore rvalue references as well, and your code will be as good as it was two years ago. Better, infact. Especially if you use the standard library. If you tend to write classes which just use default copy semantics, and rely on internal, well written members (from the standard library, for example) to copy themselves, then your classes will automatically have correct move semantics. They will be, infact, better than they were before this feature existed, through no effort of your own. This is, in my opinion, how almost all classes should be written anyway. If, however, you tend to write classes with specialized copy semantics, then your classes will be no worse than they were two years ago, and you can learn to use rvalue references at your own pace, when you feel like you want to supercharge your code. See, I thought we simply needed a language to write efficient native code. Kind of like C with modern programming constructs like namespaces, classes, interfaces, extension methods, tuples, lambdas, collections, ... To write large apps I will use C#. For energy efficient, quick starting standalone apps give me a native language with a familiar syntax.. Really? Me? Yes, you can use much of C++11 without understanding how any of it is implemented. This doesn't mean universal references are an obtuse topic. This is an advanced topic, but not so advanced that it should be ignored. After all, this move semantics stuff is also a part of C++11..... C Well, I doubt I will find a satisfactory way to organize my C++ code because I have to code in both .h and .cpp files. I like the way my C# code is organized with each class in a .cs file and those files are in folders organized along namespace lines. I like being able to return Tuples from C# methods and adding extension methods. Granted these are my little tics, but it still makes a lot of sense to me that the native language used by MSFT should be as similar to C# as possible and focused on doing what you need a native language to do - run with low overhead. I'm confused, because C++ has tuples. They were added in TR1 (VC9 SP1) and further refined during C++0x's development (VC10) until they reached their final form in C++11 (VC11, although we're still using faux variadics). It is true that ISO C++ doesn't have modules or extension methods, but it is awesome at tuples. neat. thanks! STL> By definition, perfect forwarding takes arbitrary types and passes them along. Templates are how C++ works with arbitrary types. That's an odd definition of perfect forwarding, isnt it? I think the important thing about perfect forwarding is that the parameters are forwarded, not really that they are of arbitrary types... Let's pretend the language supported universal references for non templates. Say for example that we had &&& as a new reference type such that if you have a function void f(MyClass&&& t) { g(forward<MyClass>(t)); } the compiler compiles this as if the code were template <class T> typename enable_if<is_same<string, typename remove_reference<T>::type>::value, void>::type f(T&& t) { g(forward<T>(t)); } I would still call this perfect forwarding (I'm speaking about the original code where there is no arbitrary types deduction)] I would appreciate to have a link to download slides without need to login :( @Cyttorak: You don't have to sign in. God save my mind! Great presentation! Scott is meticulous in all details as usual. Yet, I think he missed a use case saying that universal references (perfect forwarding) only occurs in deduceable context, and otherwise 'foo(T&&)' means rvalue reference. There is a stable idiom (AXE parser uses it everywhere), where this expression being used in a non-deduceable context actually means perfect forwarding, not an rvalue reference. Like this: template <class T> struct A { T t; A(T&& t) : t(std::forward<T>(t)) {} }; template <class T> A<T> make(T&& t) { return A<T>(std::forward<T>(t)); } Here is a bigger example: @passerby: I'd actually say that your example involves an rvalue reference where reference collapsing is taking place, but your point is certainly valid. My fundamental concern is that saying that "T&&" means rvalue reference makes code harder to understand, though of course it is technically accurate (as I hope I make clear in the talk). I increasingly think that the notion of universal references is useful only in the case of function templates and auto, and all the other cases are best handled by applying the reference-collapsing rules. The implementation of A's constructor looks very strange to me, but as long as the reader understands that it's completely normal for A to be instantiated with an lvalue reference, all is clear. I don't think that calling A::A's parameter a universal reference helps here. Instead, as I said, I think the best way to understand what's going on is to understand and apply the reference-collapsing rules in the context of a template whose parameter is itself provided by applying std::forward to a universal reference. Scott Remove this comment Remove this threadclose
http://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-2012-Scott-Meyers-Universal-References-in-Cpp11?format=progressive
CC-MAIN-2014-15
refinedweb
4,099
69.62
My team was working on an activity tracking application and wanted to be able to display segments of time on a radial gauge. When implementing this feature, I initially tried using some libraries specifically designed to build arcs, but I was disappointed by their lack of flexibility. When my pair recommended looking into SkiaSharp, I was pleased to discover that not only could I build a radial gauge, but I also had quite a few options as to how to build it. In this post, I’ll start with a brief introduction to SkiaSharp, give a simple example of using it in Xamarin.Forms, and then explain how I used SkiaSharp to create a radial gauge. What is SkiaSharp? SkiaSharp allows you to draw pictures for mobile apps. More specifically, it can be used to draw 2D graphics in Xamarin.Forms. SkiaSharp is powered by an open-source Skia graphics engine originally developed by Skia, Inc. and acquired by Google in 2015. The Skia graphics engine is now used for Google Chrome, Android, Mozilla, and other Google products. Using SkiaSharp in a Xamarin.Forms Project The SkiaSharp library can be added to a cross-platform Xamarin.Forms project as a Nuget Package. After adding it to your project, you create a surface or “canvas” to draw on. Then you can draw an image or text on that surface. Assuming you are creating a Multiplatform Xamarin.Forms app, the first step is installing the SkiaSharp Nuget Packages. Install the SkiaSharp and SkiaSharp.Views.Forms packages to all of your projects (both the shared project and the Droid/iOS projects). Then, using either Xamarin.Forms or C#, create a canvas where you can do your drawing. To create the canvas in XAML, first register the SkiaSharp assembly by adding Skia to the namespaces defined at the top of the file. Then, add a SkiaSharp Canvas View to the page. I gave my canvas the name “canvas” and set PaintSurface (the event handler that does the drawing) to OnPaintSurface. <ContentPage xmlns="" xmlns: <skia:SKCanvasView x: </skia:SKCanvasView> </ContentPage> The event handler (in my case, OnPaintSample) is where you specify what you would like to draw. It is implemented in C# and receives a sender of type object and arguments of type SKPaintSurfaceEventArgs. The info property of the arguments has information on the drawing area, and the surface property or arguments let you access the canvas created in XAML. Drawing Shapes with SkiaSharp Drawing a circle To test the drawing capability, I decided to make a circle. I used SKPaintSurfaceEventArgs to initialize the the height and width of my drawing surface (via the info property) and to access the canvas (via the surface property). I then used those variables to compute a radius for my circle. //get the canvas & info var canvas = args.Surface.Canvas; int surfaceWidth = args.Info.Width; int surfaceHeight = args.Info.Height; int strokeWidth = 25; //take half of width or height (whichever is shortest) float radius = (Math.Min(surfaceHeight, surfaceWidth) * 0.5f) - strokeWidth; To draw a circle with a border, I used two different paints, choosing the “stroke” style to create the border and the “fill” style to fill in the circle. //clear the canvas canvas.Clear(); //outer circle var outerPaint = new SKPaint { Style = SKPaintStyle.Stroke, //stroke so that it traces the outline Color = Color.DarkBlue.ToSKColor(), //make it the color red StrokeWidth = strokeWidth }; canvas.DrawCircle(surfaceWidth/ 2, surfaceHeight / 2, radius, outerPaint); //inner circle var innerPaint = new SKPaint() { Style = SKPaintStyle.Fill, Color = Color.LightBlue.ToSKColor(), }; canvas.DrawCircle(surfaceWidth / 2, surfaceHeight / 2, radius, innerPaint); } Drawing an arc The original challenge that led me to investigate SkiaSharp was designing a radial gauge displaying activities throughout the day. Our code computed a list of segments to display along with the desired length and category for each segment. All I had to do was find a mechanism to draw those segments on the page. SkiaSharp ended up being a great fit for this because of its many options to draw arcs. My goal was to draw an arc that would display each segment in our list with the correct length and a color corresponding to its category. Additionally, I wanted to be able mark a point on the arc with a dot. I hoped it would look something like this: SkiaSharp has a method called AddArc that, not surprisingly, adds an arc to a path. This method receives a rectangle determining the bounds of the arc, a start angle, and the “sweep angle” which specifies the length of the arc in degrees. AddArc uses these parameters to add an arc to an existing path. Thus, one way to draw a radial gauge would be to create an individual path for each segment and add an arc with a sweep angle of the segment’s length and a color corresponding to the segment’s category. However, this approach would require drawing a path for each segment. Instead, I decided to create a colored path for each category and then add all segments in a category to its respective path. I created a green path for category A and then added all segments categorized as A to the green path. Each time I added an arc to any path, I incremented a start angle variable and used that variable to determine where in the path to place the segment. My pathA looked like this: and my pathB looked like this: When all of the segments are drawn “on top of” each other, they appear to be a radial guage. Finally, in order to draw a dot in the desired location, I calculated the radius and then simply used sin and cosine to put the circle at my desired degree location. var angle = Math.PI * (startAngle + positionOfMarker) / 180.0; //calculate the radius and the center point of the circle var radius = (originalRect.Right - originalRect.Left) / 2; var middlePoint = new SKPoint(); middlePoint.X = (originalRect.Left + radius); middlePoint.Y = originalRect.Top + radius; //top of current circle plus radius surface.Canvas.DrawCircle(middlePoint.X + (float)(radius * Math.Cos(angle)), middlePoint.Y + (float)(radius * Math.Sin(angle)), 20, circlePaint); And with that, I had just the results I wanted. By commenting below, you agree to the terms and conditions outlined in our (linked) Privacy Policy1 Comment Hi, Can you please share the complete source code? Thankd
https://spin.atomicobject.com/2017/11/15/skia-sharp-shapes/
CC-MAIN-2019-09
refinedweb
1,054
65.22
Crimson Editor query Discussion in 'Perl Misc' started by V S Rawat,erces instead of crimson?David Kensche, Jan 23, 2004, in forum: Java - Replies: - 0 - Views: - 788 - David Kensche - Jan 23, 2004 Issue with combo - Tomcat5 / jre1.5 / crimson.jarSadhiesh, Jan 17, 2005, in forum: Java - Replies: - 0 - Views: - 572 - Sadhiesh - Jan 17, 2005 - Replies: - 3 - Views: - 12,232 Error in java class org.apache.crimson.parser.Parser2 ?Richard, Oct 23, 2004, in forum: XML - Replies: - 0 - Views: - 743 - Richard - Oct 23, 2004 Crimson says namespace attribute must preceed other attributes -- true?Scott Sauyet, Apr 21, 2005, in forum: XML - Replies: - 9 - Views: - 2,527 - Scott Sauyet - Apr 29, 2005
http://www.thecodingforums.com/threads/crimson-editor-query.898837/
CC-MAIN-2016-07
refinedweb
111
68.1
Panja Patchi Sastram Software [EXCLUSIVE] Download Panja Patchi Sastram Software Download Panchapakshi Software Downloads and Software : Pentimento. Vergaala Shathra Sutra: The Vedic Shatr Agama (From Elements and Panjapakshi) || Free download · Panja Patchi Sastram Software Downloadlexon_35. Yvaneswara sastram software download.. New. Download. The software is made compatible with Windows 7. 31 Oct 2011 – 4 min – Uploaded by Dinesh Tiwari Download · Pancha Patchi Sastram Software Download. Book 2 is the yellow chapter. Download · Panja Patchi Sastram Software. software”. The auctioneer has a smile on his face because he knows that. 21 Jun 2014 — 24 min – Uploaded by R SagarI never thought that I would come across a software that does all the mathematics in. 28 Dec 2013 – 8 min – Uploaded by Vignan Bhamidipatty Pancha patchi sastram software download Modern. Balachandra adangal. All collected datas were entered using MS access/excel software onto computer. 12 Dec 2010 — 8 min – Uploaded by Download · Pancha Patchi Sastram Software Downloadfree download · Panja Patchi Sastram Software Download. book 2 is the yellow chapter. Download · Panja Patchi Sastram Software. software”. The auctioneer has a smile on his face because he knows that. View and Download Avaash Mobile User Guide booklet — MS Windows XP/Vista/Windows 7. Mobile. DVD with MP4 720p : Pancha/pakshi software.. By clicking the Download Button, the software will download and install onto the. 22 Jul 2016 – 3 min – Uploaded by Mahanagar. Pancha/Pakshi Software. Book 2 is the yellow chapter. Download · Panja Patchi Sastram Software. software”. The auctioneer has a smile on his face because he knows that. Accounting eBooks Software, Accountancy Books, Accounting eBooks, eBook Software for Accountancy For Free. Download · Panja Patchi Sastram Software Downloadebook accounting information systems by marshall romney rar 42 loader iclass b9b9. An illustration of two photographs. Images. A bird book Download Kbuzlenia Balboa Escutar Player Naruto Shippuden The Legend of the Ninja: Chapter 652,. in the third episode of the anime series. After the events of the. In. Microsoft application of PDF files with extra ink and reference tools. XXX. 535994173. He is a great component of show. Here, he. There is another good episode for Naruto Shippuden… Download Free Software… My Items v1.1 (Jun 6, 2009). /* This file is part of dnSpy dnSpy is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. dnSpnSpy. If not, see . */ using System; using dnSpy.AsmEditor.DnlibDialogs; using dnSpy.AsmEditor.Properties; using dnSpy.AsmEditor.TvSettings; using dnSpy.AsmEditor.XmlRpc; using dnSpy.AsmEditor.XmlRpc.Calls; namespace dnSpy.AsmEditor.ViewHelpers { /// /// Returns the value of as a string. /// public static string GetXDocumentString(in XmlAsmNode asmNode, bool selectContent = true) { if (asmNode == null) 37a470d65a termsrv.dll has an unknown checksum sp1 fpwin pro 6 full version download ReaSoft Development reaConverter Pro 7.4 Cracked Kickin’ It – A colpi di karate 720p torrent Onekey Ghost Win 7 32bit Estadistica Para Negocios Y Economia 11 Edicion Anderson Sweeney Williams Pdf Descargar Gratis EURODENT 2000.rar Kamen Rider Battride War Pc Down Silhouette Studio Design Edition Torrent Hard Reset Tablet Sonivox
https://haitiliberte.com/advert/panja-patchi-sastram-software-exclusive-download/
CC-MAIN-2022-33
refinedweb
535
59.9
- href - target - rel - title - anchor text - write the result as a json to a file Theils the requirements. All in all, the source code is still simple and not too long: from bs4 import BeautifulSoup import sys import urllib.request import json # Prep the request url = sys.argv[1] req = urllib.request.Request( url, data=None, headers={ 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:80.0) Gecko/20100101 Firefox/80.0' } ) # Get the site f = urllib.request.urlopen(req) content = f.read().decode('utf-8') # Parse the content soup =: <?php require 'vendor/autoload.php'; // Instantiate the library $web = new \spekulatius\phpscraper(); // Navigate to the test page. $web->go($argv[1]); // Extract the links $links = $web->linksWithDetails; // Write the result file While this isn't the best possible test setup, it should do, to get a feeling for the performance. The test setup could be improved by e.g., handing the whole list of URLs in to reduce time required to bootstrap and interpret the scripts and pre-fetching the HTML and handing it into the script to exclude network delays. The tests were run using the following commands: time while read url; do python3 python3-beautifulsoup-link-extractor/link-extractor.py "$url"; done < urls and time while read url; do php phpscraper-link-extractor/link-extractor.php "$url"; done < urls Performance Test Results As mentioned above, the test is not representative and lacks fundamentals. I've still included some numbers to indicate performance differences. The Python script with BeautifulSoup took the following time to process the url list: - 21.1s - 19.2s - 16.0s - 14.9s - 16.4s The average being around 17.5 seconds to fetch and process the list of 30 URLs. The PHP script with PHPScraper achieved the following times: - 12.4s - 12.7s - 10.4s - 12.6s - 12.1s The average being around 12.0 seconds to fetch and process the list of 30 URLs. This also includes processing time for additional checks for flags such as "isNofollow", etc. which hasn't been removed from the standard library. Other experiments and tests: Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/spekulatius1984/phpscraper-and-python3-beautifulsoup-in-comparison-3aj9
CC-MAIN-2021-21
refinedweb
351
60.82
JSON has become all the rage, and I have to admit I use it about half the time. The other half of the time I've moved into using this technique which I'd like to share. The server side language here is PHP, but the concepts are entirely js centric which is why I'm posting this here and not the PHP forum. The support library of the examples is prototype.js, but jQuery and others are entirely capable of this approach. With those notes let's begin. Many, perhaps most, AJAX requests I've found consist of two major pieces - a block of HTML to be inserted, and a block of instructions for doing that insertion which may be as simple as calling a single update command or as complex as linking multiple event observers to the newly inserted block. From a "read the code" standpoint I found it annoying to have javascript callback nested in one place and the server code for the generation of that code in another. Now, if you don't have any html to insert you can send back the response with "text/javascript" as the header and the prototype.js framework will evaluate the response. This makes the js side very clean. new Ajax.Request('pathtoserverscript.php?'+$('myform').serialize()); Here we have a form that has fields holding derived data from what the user can input. The fields displaying that data are readonly, and further they aren't named because for security the server redoes the calculation at submit time. Displaying the derived data is merely a convenience. The server side is also fairly clean <?php protected function updateView() { $data = $this->tax_rate_service->rateCalculator( $_GET['funds'], $_GET['year'] ); $responder = new JavascriptResponder(); ob_start() ?><script type="text/javascript"> <?php foreach ($data['percentages'] as $fund => $value): ?> $('Percentage_<?= $fund ?>').value = '<?= number_format($value, 10).'%' ?>'; <?php endforeach ?> <?php foreach ($data['adaRates'] as $fund => $rate): ?> $('ADARate_<?= $fund ?>').value = '<?= $rate ?>'; <?php endforeach ?> $('Percentage').value = '<?= number_format($data['totals']['percentage'], 10).'%' ?>'; $('CalcRate').value = '<?= $data['totals']['rate'] ?>'; <?php if( bccomp($data['totals']['rate'], $_GET['rate'], 10) === 0 ): ?> $('CalcRate').writeAttribute({'style' : 'color: #000;'}); $('SaveButton').enable(); <?php else: ?> $('CalcRate').writeAttribute({'style' : 'color: #f00;'}); $('SaveButton').disable(); <?php endif ?> </script><?php $responder->parseScript(ob_get_clean()); $responder->respond(); } The JavascriptResponder class has the duty of setting correct headers for a javascript response, it's parseScript method simply lops off the script tags which are there to get the IDE to color context the js correctly, and no other reason. The result of this approach is our language switches, but our task doesn't and so the code becomes easier to follow since another file doesn't have to be looked up. Also, the client doesn't have to deal with the callback until it needs it. The server is free to respond however it wishes. So what if we have an HTML block? Well, we could use the above approach, but js instructions would have to be written to do the inserts. If we send it as json this leads to some escaping and encoding issues which, though solvable, makes the response file hard to read. For this reason I developed this approach which steps back to XML. First, the javascript side requires small group of functions that's to be loaded with other persistent libraries the site uses. var PNL = { xmlRequest: function (url, params) { if (typeof(params) == 'undefined') { params = {}; } params.onComplete = this.parseXMLResponse.bindAsEventListener(this); new Ajax.Request(url, params); }, parseXMLResponse: function ( r ) { var html = this.getXMLValue(r, 'h'); var js = this.getXMLValue(r, 'j'); if (js) { eval (js); } }, getXMLValue: function( node, tag ) { try { return node.responseXML.getElementsByTagName(tag)[0].firstChild.nodeValue; } catch (e) {} } } Now with that in place setting up a callback for an event becomes as simple as this. new PNL.xmlRequest('pathtoserverfile.php'); The server composes an xml file with two blocks, both enclosed in CData <r> <j><![CDATA[ .. javascript here .. ]]></j> <h><![CDATA[ .. html to insert here .. ]]></h> </r> The server response ends up like this <?php $responder = new XMLResponder('htmltemplatename' ); $responder['ada'] = $data; ob_start() ?><script type="text/javascript"> $('Breakdowns').insert({bottom: html}); $$('.ADA input.Amounts').invoke('observe', 'change', updateTotals ); $('ADADelete').observe('click', function(ev){ $$('.ADA').invoke('remove'); updateTotals(); }); if (!$('BreakdownTable').visible()) { $('BreakdownTable').show(); } </script><?php $responder->parseScript(ob_get_clean()); $responder->respond(); The XMLResponder extends off the php side's template parser. This allows the templates used for composing this javascript response to also be used to assemble the page itself on page reload, or if ajax isn't present. The details of that are out of the scope of the post - the point of the example is to again show how the thought process for writing the a fairly complex callback can be streamlined. I'm guessing that there might be a way to do this in json, but I don't think json can do it in fewer characters. The xml markup is very compressed, and any json block would have to have a LOT of character escaping going on the moment the HTML gets complicated. Debugging the response block is made painful by that escaping so I would rather not deal with it. BTW, this technique also presents a rare example of a properly used eval statement. I've found that letting the server choose the callback arbitrarily rather than try to pick from a menu of preloaded, precanned callbacks is very powerful and flexible. It also helps with planning since you don't have to worry with a callback until you actually write it.
https://www.sitepoint.com/community/t/a-use-for-and-return-to-xml-responses/91548
CC-MAIN-2017-09
refinedweb
912
55.54
means of mapping memory address space to devices external (IO) to the CPU, that is not memory. For example (assuming a 32-bit system) - Flash could be mapped to address. Figure 2. Address Decoding with port map () LPC40xx memory map Figure 3. LPC40xx Memory Map From this you can get an idea of which section of memory space is used for what. This can be found in the UM10562 LPC40 file. /* The goal of this software is to set the GPIO pin P1.0 to low then high after some time. Pin P1.0 is connected to an LED. The address to set the direction for port 1 GPIOs is below: DIR1 = 0x20098020 The address to set a pin in port 1 is below: PIN1 = 0x20098034 */ #include <stdint.h> volatile uint32_t * const DIR1 = (uint32_t *)(0x20098020); volatile uint32_t * const PIN1 = (uint32_t *)(0x20098034); int main(void) { // Set 0th bit, setting Pin 0 of Port 1 to an output pin (*DIR1) |= (1 << 0); // Set 0th bit, setting Pin 0 of Port 1 to high (*PIN1) |= (1 << 0); // Loop for a while (volatile is needed!) for(volatile uint32_t i = 0; i < 0x01000000; i++); // Clear 0th bit, setting Pin 0 of Port 1 to low (*PIN1) &= ~ it's a lot of work. You have to go back to the user manual to see which addresses are for what register. There must be some better way!! Take a look at the lpc40xx.h file, which It is located in the sjtwo-c/projects/lpc40xx_freertos/lpc40xx.h. Here you will find definitions for each peripheral memory address in the system. Let's corresponds to the GPIO port 1. - If you inspect LPC_GPIO_TypeDef, you can see the members that represent register DIR and PIN - You can now access DIR and PIN registers in the following way: #include "lpc40xx.h" int main(void) { // Set 0th bit, setting Pin 0 of Port 1 to an output pin LPC_GPIO1->DIR |= (1 << 0); //// Set 0th bit, setting Pin 0 of Port 1 to high LPC_GPIO1->PIN |= (1 << 0); //// Loop for a while (volatile is needed!) for(volatile uint32_t i = 0; i < 0x01000000; i++); //// Clear 0th bit, setting Pin 1.0 to low LPC_GPIO1->PIN &= ~. Revision #12 Created 3 years ago by Admin Updated 6 months ago by vidushi
http://books.socialledge.com/books/embedded-drivers-real-time-operating-systems/page/lpc40xx-mcu-memory-map/export/html
CC-MAIN-2021-17
refinedweb
376
80.51
frexp, frexpf, frexpl − convert floating-point number to fractional and integral components #include <math.h> double frexp(double x, int *exp); float frexpf(float x, int *exp); long double frexpl(long double x, int *exp); Link with −lm. Feature Test Macro Requirements for glibc (see feature_test_macros(7)): frexpf(), frexpl(): _BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 600 || _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L; or cc -std=c99 The frexp() function is used to split the number x into a normalized fraction and an exponent which is stored in exp. The frexp() function returns. No errors occur. For an explanation of the terms used in this section, see attributes(7). −4 frexp(−4, &e) = −0.5: −0.5 * 2^3 = −4 Program source ); } ldexp(3), modf(3) This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at−pages/.
https://man.linuxtool.net/centos7/u3/man/3_frexpf.html
CC-MAIN-2021-31
refinedweb
148
58.18
NAME mount, umount, umount2 - mount and unmount filesystems SYNOPSIS #include <sys/mount.h> int mount(const char *source, const char *target, const char *filesystem. Appropriate privilege (Linux: the CAP_SYS_ADMIN capability) is required to system. Bind mounts may cross file system boundaries and span chroot(2) jails. The filesystemtype, mountflags, and data arguments are ignored. MS_DIRSYNC (since Linux 2.5.19) Make directory changes on this file system synchronous. (This property can be obtained for individual directories or subtrees using chattr(1).) MS_MANDLOCK Permit mandatory locking on files in this file system. . This flag provides a subset of the functionality provided by MS_NOATIME; that is, MS_NOATIME implies MS_NODIRATIME. MS_NOEXEC Do not allow programs to be executed from this file system. MS_NOSUID Do not honor set-user-ID and set-group-ID bits when executing programs from this file system. MS_RDONLY Mount file system read-only. mutt(1), that need to know when a file has been read since it was last modified. MS_REMOUNT Remount an existing mount. This allows you to change the mountflags and data of an existing mount without having to unmount and remount the file system. source and target should be the same values specified in the initial mount() call; filesystemtype is ignored..AGAIN A call to umount2() specifying MNT_EXPIRE successfully marked an unbusy file system as expired. ’/’. 2.77 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/hardy/en/man2/umount2.2.html
CC-MAIN-2015-48
refinedweb
242
51.55
This essay below from Ralph Ellis was posted in comments a couple of days ago, and I decided to promote it to a full post. For the record, let me say that I support some of the renewable energy ideas, even putting money where my mouth is, putting solar on my own home and a local school. However, neither project would have been possible without state subsidies. For renewable energy to work in our economy, it must move past the government subsidy stage and become more efficient. It took over a hundred years t create our current energy infrastructure, anyone who believes we can completely rebuild it with the current crop of renewable energy technologies is not realistic. – Anthony Renewable energy – our downfall? By Ralph Ellis’s’s reaction to this steamrollered, image-based decision-making process has been muted to the point of being inaudible, and I can only assume that either very few in the media have any grasp of the calamitous implications of the government’s policy, or they are cowering behind their desks for fear of losing their jobs. So why, then, do I consider renewable energy to be a danger to the entire nation, both economically and socially? This is, after all, ‘free energy’, ‘free’. In fact, the conversion process from ‘free’ renewable energy to usable grid electricity is remarkably expensive and its enormous costs are being subsidised by the consumer. In the UK, this subsidy is achieved through Renewables Obligation Certificates, the cost of which are eventually passed onto the consumer. In 2006 the cost to consumers was £600 million, and this is predicted to rise to £3 billion in 2020. 1 That is about £200 per household per annum, on top of current energy bills, for the privilege of using of ‘free’ – just an array of perfectly silent, gently rotating wind-turbines stretching towards the horizon – it is dream-world picture direct from the cover issue of an environmentalist magazine, and the answer to a politician’s prayers. In one master-stroke the environment is magically healed, and votes are captured by the million – roll on the next election.. But what has all this doom and gloom got to do with the government’s drive for renewable energy, you might ask? Well, the entire problem with renewables – almost all renewables – is that they are dangerously intermittent power sources. Perhaps the first renewable source we should discuss is tidal power. Unfortunately, while tidal power initially looks like a dream power source of cheap, renewable energy, it suffers from massive variability in supply. The energy that it produces is tidal, and the tides are, of course, linked to the orbit of the Moon, with there being about two tides every day. This sinusoidal tidal pattern produces four slack periods during each day when the tide is turning, either at high tide or at low tide, and during these slack periods the tidal power system will not generate any electricity at all. Unfortunately, the energy that is produced is therefore delivered at set periods of the day which are connected to the orbit of the Moon, rather than our daily lives, and so the electricity produced is in no way synchronised with the electrical demand cycle. If these slack periods coincide with the 7-am and 7-pm peak demands for electricity, as they will several times a month, then the whole generating system is next to useless. Since the energy produced earlier in the day cannot be stored, as will be explained later, extra generating capacity will have to be brought on-line to cover the deficiency. This means that for every tidal system installed, a conventional power station will have to be either built or retained to ensure continuity of energy supply. But this power station will have to be up and running all the time, what is known in the industry as ’spinning-reserve’, as it takes up to 12 hours to bring a power station on-line from a cold start-up. Thus if we are to maintain continuity of supply, this wonderful ‘free-energy’ tidal source actually results in twice the cost and saves very little in the way of hydrocarbon fuels. So, unless we are prepared to accept rolling power cuts across the country, which would result in the same chaos as the Northeast blackout, it is unlikely that we could ever successfully integrate large tidal power systems into the National Grid. While tidal power may be predictably intermittent, wind power is even more problematical. Recent EU directives have stipulated that some 40% of electricity should be powered from renewable resources by 2020. If this were to be predominantly produced from wind turbines, as is likely, then we would need some 30 gigawatts (gw) of wind generating capacity. To put that figure in perspective, the UK currently has about 0.5 gw of wind capacity. However, that is not the full story, for UK wind turbines are only currently delivering about 25% of installed capacity, due to wind fluctuations and maintenance issues. That means we actually need some 120 gw of installed wind generation capacity to cover just 40% of total UK electrical demand. If the turbines being constructed average 2 mw rated capacity, then we shall need some 60,000 wind turbines to be installed over the next twelve years. And where shall we erect all those? – Certainly Not In My Back Yard. But building thousands of wind turbines still does not resolve the fundamental problem, for the real problem here is the enormous scale of wind variability. I saw a wind-power spokesman the other week on the flagship BBC Hardtalk series, who claimed that the number of days without wind power in the UK were as rare as hen’s teeth – a comment that went totally unchallenged. Well all I can say, is that the hens in the UK must look like a Tyrannosaurus Rex. The truth of the matter is that there are numerous days without significant winds across the UK, and when those conditions occur it doesn’t matter how much installed generating capacity we have, for it all goes off-line. A report from Denmark 2 indicates that the Danish ‘wind carpet’, which is the largest array of wind turbines in Europe, generated less than 1% of installed power on 54 days during 2002. That is more than one day every week of the year without electrical power. However, if we broaden the definition of ‘without power’ slightly, the same Danish ‘wind carpet’ generated less than 10% of installed capacity for some 16 weeks during 2003. Yet Denmark has the same kind of northerly, maritime weather systems as does the UK. Thus the wind-generation industry is lying to us, once more, for a ‘wind carpet’ that generates less than 10% of installed capacity it next to useless, for the national electrical grid will never cope with such a massive reduction in power supply. In fact, wind generation is so useless, that Denmark, Europe’s largest wind generating nation by far, has never used any of its wind-generated electricity – because it is too variable. It is almost impossible to integrate wind power into a normal generating grid, and so Denmark has merely exported its variable wind supplies to Norway and Sweden. 3 These nations can cope with these electrical fluctuations because of their abundance of hydro-electric power, which can be turned on and off quite rapidly, unlike most other generating systems. This revelation, that wind power is totally unusable, brings us onto the other great lie of renewable energy proponents – the lie that renewable power can somehow be stored to cope with power outages. The first of these miraculous energy storage facilities, that is said to come to the aid of the thousands of wind-turbines that lie motionless across the entire nation, is the pumped water storage system. However, this claim is utter nonsense, and for the following reasons: a. Our present pumped storage systems are already fully utilized in overcoming variability in electrical DEMAND, and so they have absolutely no extra capacity for overcoming variability in SUPPLY due to the unreliable wind and tidal generation systems. b. Pumped storage systems currently only supply a very small percentage of the grid (about 5%) for just a few hours, while wind generation systems can go off-line for days or weeks at a time, as the Danish generation report clearly demonstrates. To put this argument into figures, the Dinorwig power storage system, the largest in the UK, can provide 5% of the UK’s power generation requirements (2.9 gw) for up to 5 hours before it runs out of water. (Thus the total capacity of Dinorwig is 14.5 gwh). If the UK was entirely dependent on wind power, a wind outage lasting just two days would require 140 storage stations with the same generating capacity as Dinorwig to maintain normal power supplies (assuming average UK demand of 1,000 gwh/day). As the Danish report confirms, power outages lasting a week or more are the norm, rather than the exception, and so if the UK generated a significant proportion of our electrical capacity from wind-turbines, as the EU has argued, the lights and heating systems would be going out, the computers going down and transport systems failing all over the country. c. Pumped storage systems are not only hugely expensive to construct, the topography of Britain ensures that very few sites are available, and so we will never be able to store significant amounts of our energy requirements. These storage systems also tend to be situated in areas of outstanding natural beauty, and so – you have guessed it – the Greens oppose the very storage system they are promoting. The same kind of argument can be sustained for flywheel energy storage, compressed air storage, battery storage and hydrogen storage – for each and every one of these systems is highly complex, very expensive, hugely inefficient and limited in capacity. The much hyped ‘Hydrogen Economy’ is one of these technological cul-de-sacs. It should be stated from the outset that hydrogen is not an energy source, but an energy storage system – a ‘battery’. The hydrogen has to be created before it is used, and it merely stores the energy that is flowing through the normal electrical grid. Unfortunately for the proponents of this clean ‘energy system’, hydrogen powered vehicles and generators are only about 5% efficient. A huge amount of energy is wasted in the production, liquification and storage of the hydrogen, and so hydrogen will not be propelling our cars, nor will it be storing energy for when the wind stops blowing. In addition, hydrogen storage vessels are highly flammable and potentially explosive, and I for one would rather have a nuclear power station on my doorstep than a hydrogen facility. However, the final unsayable truth about hydrogen powered vehicles (and electric vehicles) is that we would have to double or treble the number of power stations to cope with this electrical demand. The fact that many cars would recharge overnight would be useful in evening out electrical demand, but the number of power stations in the UK would at least double. Now what would the Greens have to say about that? In short, it would appear that some of the proponents of these storage systems simply have no concept of the huge amounts of energy that a nation like Britain uses within a normal week. There is no energy system available that can remotely be expected to replace renewable energy resources, while they lie dormant for weeks on end. These and other delusions that are being being peddled by renewables proponents are downright dangerous, as they give ignorant ministers in government the impression that we can maintain this nation on renewable energy supplies. But nothing could be further from the truth, and the 2003 blackouts demonstrate the seriousness of the consequences if we do run out of electrical power. Nuclear But if the large-scale use of renewable energy systems is utterly impractical, there has to be a solution to our energy supply problems; because even in the short term our dependance on foreign oil and gas places us at the mercy of oil and gas owning despots, who will seek to gain every leverage possible over us. Look at the current situation in the Middle East and Russia and multiply that by ten, and you have some idea of our future political situation if we become solely dependent on foreign energy supplies. In addition to this – for every year we delay in getting reliable and internally sourced energy supplies, millions of tonnes of a valuable mineral resources are literally going up in smoke.. That is neither an exaggeration nor a joke, for absolutely nothing in our modern world will work without adequate energy supplies and petrochemical raw materials to make the things we so often take for granted. What ever you may think about the technology, the ONLY reliable answer to our energy supply and global warming problems for the foreseeable future is going to be nuclear power (either fission or fusion). pile’ in the Manhattan Project (circa 1943). Remember that graphite and steam are an explosive combination if they get hot enough, and that’s exactly what happened at Chernobyl (this was NOT a ‘nuclear’ explosion). This arrangement should never have been allowed at the design stage, which is why the British AGRs (Advanced Gas Reactors) used an inert gas coolant. In addition, both the AGR and the the USAs PWRs (Pressurized Water Reactors) are naturally fission-stable, and their very nature will resist and counter a runaway thermic event like that which occurred at Chernobyl. While the early designs of nuclear power stations have highlighted the problems that poor design or construction can pose, our design and technological capability has moved on in great strides. The Russian RBMKs are the equivalent of a model T Ford, the British AGRs represent Morris Minor technology from the ’60s, but we are now capable of producing Bugattis and Ferraris – which provide a quantum leap in terms of safety and efficiency. The point is that there are methods of reducing nuclear risks if we put our minds to it, and the latest design from Westinghouse – the AP1000 – will be able to deliver ten times the efficiency of the reactors in current use. (Which makes it odd that the UK government have just sold Westinghouse to Toshiba of Japan, just as orders for new power stations are about to be signed.) Therefore, we could supply Britain’s entire current and future energy requirements with nuclear power, while only using the same amount of nuclear material that is in circulation today (and which produces just 20% of our needs). Remember also that nuclear power is non-polluting in terms of greenhouse gasses, acid rain and other noxious emissions, and thus all of the reductions that we aspire to make in these pollutants could be achieved in a stroke if we turned to nuclear power. And when it comes to nuclear safety issues, let us not forget that thousands of people in ships and submarines live in close proximity to nuclear plants with no ill-effects. Also remember that while nuclear power has acquired a bad name, courtesy of some sections of the media, far more ecological damage has been done and many more people have died though oil and coal extraction, over the past decades, than in nuclear power incidents. Remember Piper Alpha, Aberfan, Torry Canyon, Exxon Valdes, etc: etc:? The list is almost endless, especially if one includes all the coal-pit disasters in Russia and China, from which much of our energy, in terms of finished products, is now sourced.’s very own technological Taliban, and in this guise they must be vigorously opposed. However, it should be borne in mind that fission power is only a temporary stop-gap that will maintain our economy and civilisation over the next century until something better comes along. Nuclear fusion may well be that brighter future, but for all the reasons already given we need a solution now, not in 30 year’s time. Nuclear fission will provide a stop-gap for that vital century, but fission power on its own is a non-renewable energy resource. The way forward has to be fast-breeder fission, where the nuclear core creates its own fuel supply, a technique that has already been demonstrated and perfected. This energy source would provide the world with 1,000 years of energy, a large enough stop-gap to allow all kinds of new exotic energy sources to be discovered and exploited. We have about 30 or so years before the shortage of oil becomes acute and our economies and societies begin to falter, and that is not very much time in which to alter our entire energy production industry. It is like relying on the Victorians to plan ahead and ensure that we still had a viable civilisation in the 1930s. And while the Victorians were both successful and resourceful, history demonstrates that new sources of raw materials were never actively planned until the old sources were in desperately short supply or worked-out completely. However, the introduction of a new, nationwide power generating system is an extremely long-term investment, and if we are to make this change without a dramatic interruption to our energy supplies (and our society) we need foresight, vision and a quick decision. What we need is a tough, educated, talented, rational leader to take a difficult but responsible decision to dramatically increase our nuclear energy production capability. However, what we have in the UK is Gordon Brown! Ralph Ellis June 2004 1. David Derbyshire, Daily Mail 5th Feb 2008. 2 & 3 Hugh Sharman, Why wind power works in Denmark. 382 thoughts on “Renewable energy – our downfall?” I would like to add – regarding the “hydrogen economy” If you burn H2 in cars, the exhaust output is not CO2 as in regular fuels, but H20 – water. That might make your average greenie moron swoon with delight, except that they dont realise that H20 is many many many times stronger a greenhouse gas than C02. If you believe in AGW, then the WORST thing you can put into the atmosphere is water vapor, and that is exactly what Hydrogen fuel will do. “One of the mysteries of our time is how impossible it is to interest people in the mind-boggling sums cited by governments all over the world as the cost of the measures they wish to see taken to “stop climate change” This article might make an interesting read The Greens oppose technical progress because it is necessary to destroy it as part of their political agenda to destroy capitalism and globalization. Green Peace ceased to be an environmental group a long time ago. It merely uses the environment as a cover. Ralph, Clearly a heartfelt, passionate argument for the continuation of a civilization on the basis of a strong requirement for and dependence on continuous and reliable supplies of energy. It would be good to see an inclusion of Solar in this exposition, if only to lay out the differences that may exist, if any, as compared to Tidal or Wind.. What a clear, succinct and erudite essay, written in a style that everyone can understand. It is just the sort of article to help counter some of the unopposed lunacy we face in coming years. Ralph Ellis should – and has a responsiblity to – send it to every mainsteam paper in UK and to the Sundays too for publication as an article rather than a letter. But perhaps he has and it has been spiked – that would be no surprise in these days of selective information management. Small point – Is the date at the bottom correct? No argument from me, a very sensible outline of the power predicaments. Thorium might have got a mention, inherantly stable , on paper, more of the raw material than uranium and less long term waste disposal problems (but we have safe solutions already for uranium) but have yet to see a working prototype. I read that the head of Germany’s power production resigned over the impossibility of integrating the 17% of their power produced by wind into the grid for the same reasons that Denmark has, the need for spinning backup to equal it. When it comes to nuclear fission energy production we absolutely DO NOT want to produce Bugattis and Ferraris. Try Hondas and Toyotas . interesting! and the problem of nuclear waste? how to be addressed? I admire that phrase, “A technical Taliban.” It puts things in a nut-shell, and nicely fits a bumper sticker. Due to an icestorm, much of my town went without power for over a week last December, and I can tell you it was no picnic. Once pipes started to freeze people lost their sense of humor, and when the cheap generators started burning out, (for they were never designed to run non-stop for days, ) a certain grimness filled all faces. I can now joke about everyone having “bad hair days,” and about children wishing school would open once they realized computers didn’t work, but at the time all life was reduced to just-getting-by. After getting wood for a fire, and chopping through ice to get water for the toilet, and heating brook-water for washing, and driving twenty miles to find a gas station that worked and to buy bottled water, little time was left for anything else. Tempers got short. And when the power finally came back on, gratitude was huge. Environmentalism is given a bad name by the Eco-Taliban. In many ways they remind me of pot-headed Hippies I knew back in 1969: While living in the lap of luxury at college the Hippies would criticize their parents, inventing reasons they couldn’t work the jobs their parents worked, and dreaming up marijuana-inspired “alternative lifestyles” which sounded like fun, when discussed while stoned out of their gourds. However once they attempted to make the “alternative lifestyles” real, all sorts of problems surfaced, and communes which began with high ideals swiftly disintegrated into nasty quarrels about who should wash the dishes, and where the funds were going to come from if no one worked. On the sly, most were secretly writing their parents, “Send money.” In the end the Hippies either broke down and compromised their high ideals by getting a “real job,” or they found some way to stay in college. When in a cynical mood I feel it is the ones who never left college who are attempting to run the show, these days. There are certain fundamental facts they somehow have never gotten around to facing. Alarmist, five years ago! “The (WELSH) government development plans, which are legally binding, are far in advance of anything planned for England or Scotland and would see it become energy self-sufficient in using renewable electricity within 20 years and reduce waste to zero by 2050.” “We are committed to making annual 3% cuts in greenhouse gas emissions from 2011” Wales, part of the United Kingdom with a population of just under 3 million Hi With regards to the Danish wind-carpet could you please provide a link/source? Don’t get me wrong. As a Dane myself I am fully aware of the problems associated with wind and am by no measure a fan of the way it is being utilized as of the moment. But 54 days at that low production does not resonate with the data available in from the Danish authorities which provide data for the production on an hourly basis. Thus, just curious as for you source? On a different but related note I have an ambivalent view on not only the supporters of wind power/’green power’ but also the opponents. Without a doubt wind power is a lot more expensive than many other forms of energy. However as a liberal in terms of economics I also support the view that efficiency is not the controlling parameter for whether a product belongs in the marketplace. Sure price is important but we all know that a lot of products we use every day is not ‘efficient’ in a strict economic sense. We wear shirts and jeans which cost more than the cheapest available. We don’t eat the cheapest food. We don’t drive the most cost effective car and so on. We let so many other parameters influence our choices which is a good thing – after all we are humans and not homus economicus. My problem with wind power is that opponents to wind shouldn’t be ‘fighting’ the implementation of wind but instead the way it’s being done. Rather than government subsidies directly or in the form of tax reductions it should be possible for consumers to purchase the kind of power they would like. Obviously there is a demand for ‘green’-energy. Well then let people be able to sign up for ‘expensive’ power and pay twice the price for their electricity. I couldn’t care less what my neighbor pays for his electricity as long as he has a choice. If he e.g. lives in California let him pay the added cost of the product he demands and there will be a supply from wind farms on the coast or from solar plants in Nevada where they are able to ‘store’ energy in salt-silos and thus address one of the issues of reliability of the energy-source. Again sure it is more expensive but as I don’t care if people shop for their clothes in Walmart or on Rodeo Drive I don’t have a problem with ‘greenies’ putting their own money where their mouth is. Only problem it is not possible as it is today because of how the energy markets are designed with Government having way to much influence. We all should know how that inevitably distorts markets and supply/demand mechanisms. Not an easy task due to the nature of the product (energy) which is futile and hard to store. The storage problem can be worked on however. In Nevada, Spain and Germany the use heated salt as a medium for storing energy. In Sweden and Norway the use wind power to pump water back up into higher grounds so that hydropower can be used more reliably. All more expensive forms of energy-production for sure, but again who cares as long as there is a market based demand for the products, where some consumers based on whatever ‘logic’ attributes ‘quality’ to that form of power-generation. In terms of politics the advantage of a ‘market’ based discussion instead of the current pro/against one is that it will clearly demonstrate to the greenies if in fact people are willing to pay for the energy or not. It will in fact shift their focus to addressing people and consumers rather than lobbying behind closed dors with government. Slightly off topic for which I apologize. More on topic I support the idea that more nuclear plants be build. Great angle that it is done to ‘alleviate’ the Chinese coalminers of their sufferings. Now who could argue with that? Opposing nuclear energy due to ‘fear’ of local death and destruction is in a sense supporting exporting IRL perils to the poor coalminers in China, South Africa, Columbia etc.. Sorry for misspellings and bad language. And thanks to Anthony and contributors for a great blog which is enjoyed regularly. Anthony, Given the sentiments expressed in your preface, I would welcome a short article, or series, on all of the more outlandish concepts for energy generation and storage or sourcing that are being generated. One of my favourites is the possibilty of recombining CO2 and H20 to form methane, by making use of the enormous pressures and temperatures at the mid-ocean thermal vents. But there may be a huge number of similar kinds of interesting speculative ideas, which at the least may give us some hopefull discussion as a couterbalance to the undeniable eeyorism of the ‘greens’. Straight from the horse’s mouth “First Minister Rhodri Morgan today said all parts of Welsh society will have to play their part to fashion a sustainable future and a Wales fit for generations to come.” Within the boat building industry, they’ve been wrestling with the problems of providing electrical power afloat for many years. On page 38 of Professional Boat Builder one will find an article by Nigel Calder, in which he describes his efforts to advance the cause of full personal comfort at maximum efficiency. See also page 136 In his article, Nigel makes reference to previous articles he has written, on the subject, all of which can be retrieved via the Archive button. They are a “must read”. Granted it doestake money to purchase the equipment that would enable a family to come off the national grid, whilst still retaining all the mod cons, but the method is described in detail and were I considering building a new house, then making full use of every amp of self generated electricity and every BTU of fuel I would gather for heating and cooking would be properly assessed on a cost benefit basis and not using the calculations of Mann, Steig, Santer, Hansen and the rest of nutters who want to bankrupt the world economies. Fuel prices were once set by business considerations. Now those prices are controlled by corrupt politicians to have power over the population. How long will it be before we have laws that demand all types of fuel are purchased from state owned owned suppliers and no, you cannot gather fallen branches anymore either! Are our 800-year-old rights to gather firewood for the chop? Hmmmm, Nice essay. Just to add to the nay-saying on pumped storage; Mr Ellis didn’t emphasise the point that in a pump-storage energy cycle, the reserves have to be repenished a the same time as the primary source is again taking the load. Again this means installing much larger primary capacity than would otherwise be the case. If the primary source has only enough capacity either re-fill the tank or meet the load what use is that? Some very good points are made, especially with hydrogen but, as usual, there is a tendency to go too far in making those points: 1. The costs of wind or underground storage are as nothing compared to nuclear plants, particularly decommissioning costs. Sellafield is being decommissioned now and the costs are spiraling out of control – the latest estimate being 80 billion. And just ask Norwegians how they feel about nuclear reprocessing from the Thorpe plant. Many other plants need decommissioned too as well as the massive and costly operation to build new plants. There just isn’t the money, which will soon be realized by the powers that be! There is a place for nuclear – particularly the newer designs and the Thorium plants but there is a very good argument for diversity too, particularly if your main worry is domination by the fuel suppliers. 2. Nobody, not even the EU, has suggested that wind power be the number one provider of electricity so all of the scary scenarios suggested are just not applicable. If wind forms between 20% – and experience suggests 20% is a good minimum assumption for several countries because you just don’t get doldrums on most windmill sites. As such very windy sites are limited it naturally limits windmill expansion anyway. Look at the Danish wind energy website: they have a FAQ which points out some of the basic accountancy errors that are made too often by journos and economists. 3. Those blackouts talked about arose under fossil fuels domination. There is an odd idea going around that coal plants are flexible to cope with demand. Well they aren’t! You get around these problems by anticipating demand and oversupplying. 4. There are significant gains to be made by geothermal heating which is still in its infancy. If we spent money on that instead of being obsessed with buying a new car every year – perhaps even with low interest credit then the money wouldn’t be an issue. Imagine if every new house was forced to install it and hence be forced to suffer at least 50% savings on heating/cooling costs. Agree with the article. Nuclear is cheaper than coal, available, safe, clean and raw fuel is abundant (for a one or two thousand years into the future). The amount proposed to be collected by Cap&Trade over the next 8 years would pay for all the plants needed in the USA. If, as the greens say, “we let the science decide”, then Nuclear is a “no brainer” — if one is serious about sustainable energy, AGW, and/or the environment. “Technological Taliban”: now that is a catchy phrase. This article in very instructive concerning the practical problems of energy generation and the need for a renewed investment in nuclear energy, but the author should read “The Ultimate Resource 2” by Julian Simon before saying utterly stupid things like “history demonstrates that new sources of raw materials were never actively planned until the old sources were in desperately short supply or worked-out completely”. For example, the move from coal-power to oil-power did not happen because we were running out of coal, or because coal was in “desperately short supply”, it was done because oil products were more efficient and could be transported more easily. The “resources are finite” meme needs to be put out of its misery once and for all! Interesting post which sets out clearly the drawbacks associated with renewable energy sources, but perhaps overly pessimistic about the availability of oil in the future in the light of Steven Goddard’s post on this site titled ‘ Energy Availability Is Almost Infinite’ I see this problem with many greens, the inability to see scale. The eyes, of those I have met at least, glaze over once you get much past ‘mega’. It is as though they see ‘mega’ as being a big number and therefore enough. I find it difficult to get them past that and into 9, 12 or 15 zero’s. So that when you get to a UK reasonable 350, 000, 000, 000, 000 W the ‘green fuse’ has blown and they are in arm waving spasms. The conversation at that point has shifted from practical supply problems to rationing and a move to some kind of fantasy green future. I latched onto the phrase “technological Taleban” as well. Very useful! Some more in-depth articles on nuclear power would be helpful. There seems to be a variety of views around as to whether storage and clean-up is or is not going to be a problem in future years. I know that decommissioning costs for the current generation are sizeable but I have read somewhere that the next generation will be able to re-use fuel until it virtually ceases to exist. It looks to me as if the eco-fascists arguments against nuclear power are based on the fact that it *is* clean, cheap and reliable. Wasn’t it Ehrlich who said, “Giving society cheap, abundant energy would be the equivalent of giving an idiot child a machine gun.”? As far as I recall he never explained why that should be. Which looks like a bargain next to carbon taxes. It also assumes that having discovered this hidden cost in older reactor designs new designs will make the same mistake. “In the US, the latest costing of President Obama’s “cap and trade” Bill is $1.9 trillion, a yearly cost to each US family of $4,500.” Decommissioning just looks better all the time. Defrayed across the life of the reactor (let’s say 30 years) even 80 billion per reactor looks like very good value for money in comparison. I’m convinced the future is in gas anyway: Much cleaner, more efficient efficient and abundant whether natural or by gasification of coal/shale or garbage. And cheaper than digging for a lot of the 200 to 500 years worth of coal that is uneconomic just now. It should even be possible to easily extract the CO2 by gasification. Then all you need do is find a market for it. Let’s see: You can force it down stripper wells to get oil out, you can force feed super greenhouses with it to feed the coming food demand, use it to displace water in quick-setting flexible concretes, use in the new breed of HFC refrigerators…Indeed there are some reasons to expect to make money out of CO2. Could one of your highly-qualified contributors write an essay examining the possibilities of geothermal energy, please? Apparently Nuclear Fusion is the best, as Nuclear Fission generates radioactive wastes, whilst Fusion does not. The main problem with Fusion which prevents it from becoming the next best energy source is that the reactants must be in total suspension to release maximum energy, and since levitation is currently not an option, experiments with Fusion have not released more energy out than in due to the contact between reactants and container. OT News (but very important for the solar junkies): The Solaemon page has *finally* been updated with some great graphics regarding the evolution of the current solar cycle’s activity (SC23). oh gee whiz. Let me see if I have this straight: “Renewable energy will need to be reliable.” Doh. Hi, the CEO of Eon in the UK has made it quite clear that we will need to install extra fossil fuel powered generation equivalent to a minimum of 95% of the maximum capacity of wind turbines in the UK which will be running 100% of the time burning precious gas to cope with the extreme variability of wind powered generation but only producing electricity when the wind turbines are not. This begs the question why not just forget wind turbines and use gas anyway? The overriding reason for having wind turbines in the first place was to avoid producing more Co2, the Danes have admitted that in the time that they have had wind turbines not one single fossil fuel power station has been closed and in order to cope with the extreme variability of wind power they have in fact produced more Co2 because they have needed to keep more fossil fuel power generation on line to avoid blowing up the grid. In a government report they have also admitted that whilst wind turbines did generate some electricity 80% of it went to Germany and Norway at zero cost to them, given away. But I return to my main point the unique selling point for wind power was a reduction in Co2 to avoid catastrophic climate change and on this prime issue wind power is a complete and utter failure so why are we in the UK intending to spend over £400 bn (Ed Milibands website) on this daft unproven technology. There are maybe three reasons why human will become extinct, 1, we will increase population to the extent that we will deny ourselves the habitat to sustain life, 2, our climate could get colder and a drop of 3c will be sufficient to remove all means of sustaining life (when this last occurred in Europe people were eating their children) by comparison getting a little warmer is not a challenge, 3, when oil just tips over the top and deliverable production declines by just 1% that will signify the end of our existence as we know it, your house will be worth zero, those that will survive will most likely be in Africa where they can grow and eat what they grow because in the UK we rely on 42 ton trucks running on diesel, overnight Tesco will disappear. Forget about arguing over whether or not the planet will heat up think about how we manage being without oil because its 95% of who we are and what we enjoy. Remember every wind turbine rotor blade weighs in at 6.5 tons and its all made from oil derived product and tell me when oil runs out, bearing in mind that wind turbines have a half life of maybe ten years, less at sea where 25% of the time they are out of production due to failure or maintenance, without oil how do they get serviced and replaced? Barges and cranes to my knowledge need a little diesel!! Many readers here will appreciate the story found here: MSM is quite shy about discussing such things as actual costs of these systems. The article referred to above states: There was much fanfare in the Denver Post about a windmill installed in Weld County Colorado for a school. It discussed everything except the initial cost and pay back period. Of course the author did not understand the difference between kilowatt and kilowatt-hr. One of the chief characteristics of media writers regarding science and engineering is they do not understand basics such as units of measure. This is part of problem, but then again, perhaps it makes it easier to control them when they are ignorant. I wrote a letter to the author requesting clarification on terms used and what the costs were. No response. Obviously I’m one of ‘them’, just a trouble maker not getting with the program and undermining the ‘unity’ of the country. Trouble with making energy policy political is that the technical issues are so far beyond the pay grade of the average citizen it is not possible to have a rational discussion. On nuclear waste, see how France has dealt with the matter: Oil is a renewable energy. If not used is spilled at sea, and by the action of microbes concludes that methane. It is a true air pollutant. Ironically, it was Clinton/Gore that killed off the IFR project (with prejudice) immediately after its second successful demonstration of the ability to reprocess fuel, generate excess heat (for, say, breaking down water) while Not requiring of Any cooling capacity… Fuel for 50,000 homes, for a year, would result in a thimble full of waste with a half life of 200 yr.s… once vitrified, easily disposed of into the caverness depths of the oceans… (1000 yrs. of the US Total energy needs would not noticably raise the Oceans’ ambient levels) Thus, I blame the Demicans as much as the corruption of the Republicrats for putting the US and the rest of the industrialized nations of the world at the brink of civilization’s demise. There is No problem facing humanity that we could not overcome with an abundant source of cheap energy. This, however, does not appear to be the goal of our leaders or rather, the few that actually direct those ‘leaders’ strings. Paper where some of this data on wind appears to come from Why do we feel we have a right to survive?! Because Nature made us and nature set us in competition with every other species, just like anything else in nature is in competition for existance. A virus could wipe us out tomorrow. But with us nature created something new, a monkey that could think complex thoughts and develop complex technology and culture. And that has allowed us to lighten our footprint and reduce our damage. And the more humans are born the further we can evolve. So recently we have become advanced enough to feel for the environment consciously, something which no other species is able to do. Repeat, no species is able to consciously live a green lifestyle, except us. And yet we imagine we are the problem. We can desire more diversity, but nature’s drive is also towards more intelligence. M White (03:04:05) : Regarding the stampede to a green future is Wales, be aware that the largest gas powered generating station in the UK is under contruction near Pembroke Dock. Well stated. Thank you for that summary. I am only too aware of all of the topics you mentioned, and I’m shocked that the rest of the world is not. A few observations of my own: 1. I truly believe that the majority of the citizens of the US are unaware of the complexities of power generation and dissemination. They simply believe that “green is good” and “renewable is good”, and that if Obama and his minions say it will work, it will work. I’m more afraid that Obama and his minions actually think it will work! I went to a seminar once hosted by the outgoing President of a huge power company in the SouthEast USA. He raised each one of the points you mentioned, and shot them down for the same reasons. He also came to the simple conclusion that nuclear power, done right, was the only solution for a consistent, clean, ong-term energy supply. I have come to the same conclusion. 2. I got it into my head one day to build a wind turbine in my back yard. I found some great plans on the web and began to consider how I would collect the parts. However, I sat out on my deck one evening….and realized there was no wind. For days afterwards I watched. While some days were windy (obviously), for the most part it was fairly calm. Plus the wind would start, then stop, then start…etc. I figured much of the wind power would go into just getting the turbines running again, and once they began…the wind would stop and the turbines would grind to a halt. Needless to say, I scratched that idea off of the list of honeydos. 3. In my own household, when the electricity goes out for even an hour or two…panic sets in. While manageable during the summer (more an inconvenience), it is a potentially life threatening situation in the cold of the winter. Most homes these days are not built to be warmed by the cheap fireplaces that are currently installed. Reliable, consistent energy is a necessity in todays world. Without it, there indeed will be unrest and unnecessary deaths. Ralph, Good discussion of the problems with renewables. However, I can’t get on board with the fission and particularly the breeder idea. A friend of mine used to have a bumper sticker “A little nukie never hurt anyone.” Iran has been using their “right to fission power” as a cover to build a bomb, which is probably just a few months away. They have done this with the help of Russia. North Korea blew up a nuclear weapon yesterday. The EMP from one nuclear bomb could do massive damage to the infrastructure of a multi-million square mile region. We have enough coal to last hundreds of years – plenty of time to get fusion energy everywhere. The AGW panic is driving formerly sane countries towards suicide. @David Wells (04:50:17) : You wrote in part: “[…}!” Pardon my ignorance, but I don’t get the connection between McDonalds and the destruction of the Amazon. Please explain, & thank you in advance. P.S. What you said re just going straight to and sticking with gas; good point. Thank you all for your comments, which I will attempt to answer. . >>send it to every mainsteam paper in UK >>Small point – Is the date at the bottom correct? Yes, every media outlet has had this article on numerous occasions, to no avail as yet. And yes, the 2004 date on the article is correct – I was well ahead of the game, I think. It is surprising that the article has not needed amending over all this time, and we are still steam-rollering towards a Green Armageddon. . . >>With regards to the Danish wind-carpet could you >>please provide a link/source? Here it is. This is an interesting paper, and it deserves very close and detailed reading. Clearly, the Danish experiment with wind-power has been a complete disaster, but you would never know that from Green media articles. I also hear that there have been a number of maintenance and mechanical failure issues recently with these vast offshore arrays, and it is looking like they will not last very long (or become even more hugely expensive to run). . . >>It would be good to see an inclusion of Solar in >>this exposition, if only to lay out the differences >>that may exist, if any, as compared to Tidal or Wind Perhaps I should. I left it out because solar plainly does not work in the UK. (A colleague has tripled his electrical usage and energy costs, because of a solar heating system that does not work 200 days a year and an overly idealistic decision to delete the original a heating system). But if you have traveled through the Mediterranean, it is obviously a good idea for water heating requirements in these areas, and nearly every flat and house has a simple roof-top system – but do take a shower in the evening rather than the morning. However, for electrical generation we still have the problem of storage, and there is no known system which can store the kinds of power requirements that a country needs during night hours (or, indeed, several days of overcast). . . >>interesting! >>and the problem of nuclear waste? how to be addressed? Due to Green pressure and protest, we seem to be happy with all this waste sitting in open ponds on the west coast of the UK. If it is perfectly acceptable to all and sundry to have high-level nuclear waste in open ponds, why do we not just bury it? It would be much safer, I am sure (except in Green logic). The granite of Cumbria (the region with the Selafield reprocessing facility) would be quite appropriate for burial, especially if the plan included a great underground concrete bunker too. . . >>Alarmist, five years ago! Its been a long struggle of faith vs reason. I regard this as an equivalent of the 18th century Enlightenment Era, and that battle was not won overnight either. . . >>but perhaps overly pessimistic about the availability of >>oil in the future in the light of Steven Goddard’s post I have a distrust of those nice symmetric graphs of oil reserves. The upslope contains all the world’s most massive oil fields, while the downslope will contain a miriad of worthless puddles. I think the downslope will be much steeper than portrayed, and thus we may well be at Peak Oil (the maximum supply that the world can extract per day – not the end of total reserves). . . >. . . >>Nobody, not even the EU, has suggested that wind power >>be the number one provider of electricity so all of the scary >>scenarios suggested are just not applicable. They have made a target of 40% renewables, and the main pillar of renewables is wind power (anyway, wave and solar suffer from the same shortcomings as wind). This is the proportion of unreliable energy supply that will definitely bring down entire grids. And you are greatly mistaken about so-called ‘windy sites’. There is no such thing. When the UK has a large anti-cyclone, the whole country can be effected for days or even weeks, as the data from Denmark above makes clear. Likewise the diurnal land-sea effects, which will switch off the wind like clockwork twice a day. Take a look at the following PDF. All the wind generating sites across the UK (indeed, across Europe) are in step with one another. When one loses power, they all lose power (and 15 to 20% load factor will not run a nation). There is no such thing as a reliable wind. . Eco Taliban Technological Taliban Two terms that must be bookmarked Making renewable energy cost effective and capital efficient is vitally imporant. Alvarez et al. show that Spain’s recent subsidy of renewable energy destroyed an average of 2.2 jobs for every “green” job created in renewable energy. Study of the effects on employment of public aid to renewable energy sources Gabriel Calzada Alvarez, et al., Universidad Rey Juan Carlos, March 2009 draft. The subsidy for photovoltaics destroyed 9 jobs for each renewable job created. Spain’s unemployment soared to 17.4% in 1st Quarter 2009. Concentrating solar thermal power is the most cost effective solar energy. BrightSource Energy has increased the net efficiency of power tower systems to 40%. Now to bring costs down below conventional power by mass production etc. Greenhouse issues are negligible compared with the tsunami of global peaking of light oil. The critical issue we must urgently address is developing alternative fuels fast enough to compensate for the projected 6%/year decline in light oil exports. See A Quantitative Assessment of Future Net Oil Exports by the Top Five Net Oil Exporters We need to focus on TRANSPORT Energy and liquid fuels, not “energy” per se. Last year’s tripling of oil prices showed that we don’t have “30 years” till peaking. ALL oil importing countries are already in deep trouble. I wish the article could be published in every mainstream newspaper. One of the more chuckle headed state legislators in Minnesota owns several acres of pristine rural property near the border with Wisconsin. She and her husband erected a wind turbine on their property (construction costs: tax deductible) and use it to power their small hobby farm. The state permits them to sell surplus power at retail rates to the utility company, which is obligated to buy them. On the basis of these subsidized activities, the legislator has written she doesn’t understand why we don’t just power the entire state with wind power. As I said, her scientific understanding is at a fourth grade level. Perhaps she might one day read a paper such as this which will explode her fantasy balloon. Chernobyl made a huge mess and put millions of people at risk. Imagine if the wind had of been blowing across Germany towards London that day. Consider the devastation of a Polonium-210 dirty bomb. Remember Alexander Litvinenko? David Mills shows that distributed solar energy can match the distributed load on the average. A solar-powered economy: How solar thermal can replace coal, gas and oil This reduces the level of energy storage required. “this sublime day-dream…” This “day-dreaming” is a behaviour of pseudo-gods thinking the welfare of people but it never works that way, it does not matter how good intentions our beloving masters have, “way to hell is paved with good intentions”. We have about 30 or so years before the shortage of oil Absolutely wrong!. As long as we, organic beings, exist on earth there will be organic matter dying, decomposing, forming hydrocarbons. In the imaginary and fantastic case that this would not be the case then we can synthesize oil from carbon, as germans did in WWII. This is like a never-ending-story nightmare that drives an engineer crazy. Our acquiescence to environmentalist-engineering can be said to have created the environmental problems that are now being raged about. The same crowd blocked nuclear energy back in the 70s or we would have little manmade CO2, mercury, sulphur, etc in the atmosphere. The long and short of it is, in terms of today’s arithmetic, Ralph is right, it is either nuclear or coal and in the latter, more and more we are hearing about CO2 sequestration. Am I the only one who is terrified of the sequestration idea. Let’s look at some rough arithetic. CO2 “weighs” 2kg/m cubed and emissions from coal annually are somewhere near 20B mtpy (or soon will be), equivalent to 10 trillion cubic metres at STP per year. World reserves of natural gas are about 140 trillion cubic metres, so we will be putting out CO2 essentially for coal only, at the volume of the earth’s natural gas reserves every 15 years (and climbing). Now imagine pumping this amount into underground storage (lets not visit the problem of finding sufficient storage for such volumes) under high pressure and we will be building potentially some of the world’s biggest disasters for the future. Accidents are apart of every manner of human indeavour. The very people who will be assuring us that it can be done safely are the ones who designed openings in coal mines that have collapsed, or oil and natural gas wells that have blown out of control and caught fire or unsinkable ships. With CO2, the accident could be spectacular or it could quietly leak out and fill a valley killing every living thing that isn’t a plant. It could push ground water, salt water, petroleum, natural gas, hydrogen sulphide out through fractures, leak into mines, basements, subways…. And who would be to blame? This would be another unhappy consequence of letting environmental extremists plan our futures for us. The electrical generating industry should either go ahead and weather the shrieking and sign waving and follow the only realistic options we have, or shut it down for a month until things quieten down 1. Extract – (Which makes it odd that the UK government have just sold Westinghouse to Toshiba of Japan, just as orders for new power stations are about to be signed.) 2. From comment – why are we in the UK intending to spend over £400 bn (Ed Milibands website) on this daft unproven technology. We in the UK have already seen the extent of outrageous venality displayed by our Political masters, so it could not be considered unreasonable to monitor the Directorships of companies exploiting the above technologies and the carbon trading scam to discover whether any of the present incumbents in Parliament appear either on their boards or as advisors, as a reward for their duplicitous antisocial behaviour. Fusion power, as it is being currently developed, produces a lot of radiation just like fission reactions do since an extra neutron is released in the reaction. They are using tritium (H-3 neutrons) and dueterium (H-2 neutrons) which results in a extra neutron being released when they combine to make He4. It is very difficult to fuse normal Hydrogen (H-0 neutrons) unless you have the conditions inside a star or a fusion bomb. Inside a star, 4 Hydrogen atoms fuse to make He4 (with two of the protons changing into neutrons after the release of positrons and neutrinos). The containment vessels for nuclear fusion will be extremely radioactive for a very long time but, I guess, the appropriate neutron-absorbing shielding will prevent the radiation from getting outside the containment or limiting the end-result radioactivity. His arguments are quite compelling. We can play around with new alternative energy technologies, but we must have massive investments in nuclear to bridge the energy gap from oil & gas to whatever it is that future development in technology provides. With respect to the question of where do you store the nuclear waste, it can easily and safely be stored in the Yucca Mountain site in Nevada. Only environmental opponents, politics and leftist judges are preventing this site from being used. Many will be shocked when their living standards plummet as the result of our need to compete for energy with the rest of the world. We will be competing with a weakened economy and currency. As a result, nearly everything, from food to clothing to shelter, will cost more due to a higher energy component.. There is one energy source that will remain plentiful for the next few centuries, it right here in the United States, and has an evolving technology that has reduced pollutants 95% since the 1960s and will likely address CO2 in the near future, as well. One should not dismiss renewables in the mix. They make sense – as long as one is realistic about their potential contribution. Similarly, one should not close down the current nuclear generating facilities, but to think that new nukes will be anything other than a massive black hole for money and a nightmare for waste disposal is to ignore history and to place the United States at greater risk. We just sold the farm to a guy that is going to convert it to solar power. So that is hundreds of acres of good agricultural land out of production that will be producing heavily subsidized solar power. And you wonder why the economy is collapsing? The US has huge new amounts of natural gas. U.S. Gas Fields Go From Bust to Boom By BEN CASSELMAN could hold some 200 trillion cubic feet of natural gas. That’s the the Haynesville Shale, for the dense rock formation that contains the gas — equivalent of 33 billion barrels of oil, or 18 years’ worth of current U.S. oil production. Some industry executives think the field could be several times that size. Full story – Interesting, Very well written overview on energy obstacles we all face worldwide. Hydrogen is a partial answer, as is solar and wind, but not until a new approach is used to combine these clean green technologies together will we see a viable solution. What we have witnessed over the years is a battle of which is better. This is clearly motivated by self interest groups and this is why we have seen no total solution to date, just excuses. Controlled hydrogen fusion is the answer as I have seen, but only when combined technologies are used will it become feasible from a cost basis. The problem with gov. overall is they cannot decide which direction to go, so they go nowhere fast. Subsidies are never consistant, here today, gone tomorrow, so this hinders investors to move forward in any one direction. It is not that there is no answer to our energy crisis, it is that there are no people in gov. who have a clue on how to encourage energy alteternatives, they just keep getting in the way by taking back pocket lobbyist money in U.S. which keeps true progress at bay. Witness the recent announcement from Chu who is pulling back funding for hydrogen development. The same old same old helter skepter approach is all we see in U.S. Now I wonder who paid him off. lol If we saw the Feed In Tariff adopted worldwide, we would see true progress, not by the gov. funded labs, but by the true inventors of the world, the so called backyard inventors. This is where the billions should have been spent over the past couple decades, not to these major college labs which have accomplished nothing in all these years. It is about time all countries develop a universal adoption of Feed In Tariff which will allow a profit to be made by installing solar, wind and hydrogen generators in our homes and businesses. This is how you solve current energy issues. Soon the world will see a new hydrogen technology, a closed loop system, so no emissions at all which will satisfy all the greenies out there. Just a matter of time and many new hydrogen based systems will hit market, not mega plants, but compact units for every home and business. This is where the future in energy lies, not in fusion or fission mega power plants, but small units by the millions. OT: Did you watch the Google-ad video at the top of this thread attacking hamburgers and cows as a cause of ‘global warming’? I suppose it’s ironic justice for these whackos to help pay for WUWT, but it’s disconcerting, nonetheless. I like hamburgers! Re Gary Pearse (06:06:08) : on sequestration: I too am fearful of the idea of pumping tons of compressed (liquified?) CO2 underground. While CO2 is a friendly, helpful gas (contrary to the alarmists), it can be suffocating in some circumstances. There was that lake in Africa that burped an invisible cloud of CO2 and killed everyone in a nearby village. This sequestration idea is expensive, insane folly and must be stopped before it gets enshrined into law and regulation. /Mr Lynn Thorium fission reactors are interesting. Short half life, availability of fuel, can’t make bombs from the by-products. Guess the Indians on the sub-continent have got one sorta up and running . . . M White (03:04:05) I’ll believe this when the Welsh cut their connection to the UK National Grid powered by Hydro, Nuclear, Wind (if you believe the fairy stories) and……wait for it……..COAL and GAS. There will be no Carbon fuelled power stations in Wales, they’ll just be running over the border in England and Scotland. Talk about Spin and Drivel, Welsh Windbags strike again. Mr. Hagen, I strongly agree with your conclusion that we should focus on transportation more than the electricity grid, per se, but much of the transportation needed on a daily basis could be provided with electricity in that trains and mass transits could be made to run off electricity. Ships can run off nuclear power, as well as coal, oil and even wind if you do not mind spending a few months at sea for a journey that would otherwise take a week or less. High speed trains like they have in Europe or Japan are time competitive with jets out to about 300-500 miles depending how fast the train is, how many stops it make and how you calculate the time you spend in the airport before the plane is even loaded. Cities could be designed better with smaller footprints so people could walk to where they want to go. Homes could be designed with the same yard area, and same square footage if they only built them taller, 3-4 story homes including dormers and basements would be nice, not sacrificing American standards of living or creating a “New York City” urban environment. Smaller cities in terms of area would make public transportation more usable and cost effective. Smaller denser cities would also make it less expensive to add district heating and cooling capibilities, which would reduce costs for consumers overtime. Waste heat from powerplants could be used for industrial steam as well as residential heating and possibly cooling. While reducing the distance needed to be driven by those who choose not to use public transit. Since WW2 American cities have been designed around the car. The car is not the problem, rather it is the design of American cities which are impractical to walk. Seeking a solution by merely focusing on designing different cars neglects the solution that could be created by designing different cities. I am a firm believer in clean coal technologies and I seriously doubt that it will be ignored in the future. There is too much profit to be had by developing this in-house powerful and abundant energy source. For industrial purposes, I think market forces (regardless of who is in power) will turn that way eventually as well as continue to use hydro power. I also see cities and larger industrial complexes installing the smaller nuclear plants you see in subs and ships. That said, I am quite happy using alternative sources of energy for my home and car. Solar lights, used cooking oil fuel for my car, heating, and cooking, whatever. However, right now I don’t find these sources to be of high quality or easily found. For example, solar powered porch lights, while bright enough, aren’t robust enough to handle winter weather. I bought two such lights, both with puny wiring from the solar panel to the light. One got through the winter, the other didn’t. It was the housing and wiring that was the weakest link. If you want the rank and file home owner/renter to use alternative sources, they have to be made rough and tough. I don’t mind changing light bulbs, but I do mind very much having to buy the whole enchilada every year. Bottom line, alternative energy such as would be used locally for residential purposes has to be reliable all the time, and has to last longer than a fortnight. If this would be the case, I don’t mind letting industry use hydro, nuclear, and coal fired electricity. After all, they provide jobs. Of course there are conversion technologies for coal, the Fischer-Tropsch process is where it all started back in the 1920s. Coal can be easily converted to gas, or liquid transport fuel. If you are inteserested, one of the newset coal power plants, a “coal gasification” installation has now gone online outside TAMPA FL. Coal gasification brings with it cleaner burn and higher effeciency. Its the TECO Polk power plant. And as if by magic, the USA has about 25-30% of all the world’s coal reserves. Last time I checked, in Europe, England and Germany also have massive coal reserves. Germany ran their WWII war effoert on coal conversion technology. “The critical issue we must urgently address is developing alternative fuels fast enough to compensate for the projected 6%/year decline in light oil exports.” This particular argument – we are running out of oil – has been around since I was a kid. We were always going to run completely out in 20-30 years. Of course those dates have come and gone several times over in my life. When I was a college freshman in 1979 at UVa, I took an environmental sciences seminar for a single credit. There were about 30 of us young idealogical thinkers in the class, every one of which was certain we would be out of oil well before the year 2000. The seminar leader explained why that was not true. Some believed him, others remained steadfast that we were doomed. The seminar leader was Fred Singer. Of course I’ve now read many times on sites like Real Climate that he is not to be trusted. But it seems to me like he was right about everything he said 30 years ago.. Here is a little bit of sanity !!!!! US to go for a 5% cut in CO2 over 30 years vs the goal of 30% in 30 years per the EU. May 22 (Bloomberg) — The European Union may have to scale back its goals to reduce global-warming emissions after a less- ambitious plan won initial approval in U.S. Congress. The 27-nation bloc has asked all industrialized countries to reduce greenhouse gases an average 30 percent over 30 years. The first U.S. legislation ever to cap emissions, which passed a committee vote yesterday, calls for a 5 percent cut by American industry in the period. The gap poses a potential conflict when global talks on a new climate treaty resume June 1 in Bonn. I know from first-hand experience that PV systems are not cost effective. I had a 1KW array installed. Total cost $12,000, plus I rented and ran a backhoe and helped mix and pour cement, no truck access. The avg daily output is about 4.2 KW, so over twenty years, should it last so long, the system will produce 30.66 MegaWatts, which would cost $2452.80 with conventional generation at today’s prices. This is in Colorado, with 300 sunny days a year. Instead of paying 8 cents a KW, the array produces 39 cents/KW juice. I only did it as I was told the system would provide 100% of my electricity, instead of the 25% it actually does. I’m thinking of putting in a hydrogenerator in my creek, probably a bunch of 12V DC alternators I can put into the electronics from my PV array, also a 12V DC system.. re: Alarmist 5 years ago. Very interesting paper on the UK wind farms. I would actually be happy if I thought that the wind farms would in fact be 27% efficient. What I would like to see is an analysis of the ‘minimum’ average, where all the farms (looks like June-July) were at their minimum. Those are the numbers we would have to rely on. We don’t care how much they make at their peak, or at their overall average, the problem is the fact that at times they make dramatically less than that average…. Once upon a time a visionary was a person with a possible solution to a perceived problem who worked mind-numbing hours to develop his vision and persuaded private citizens to invest in his vision. Oh, and everybody made money from that visionary. Today’s visionary takes a shortcut – it’s his vision and our money. And how will his vision become real? Details, grubby details; let other mere mortals do the hard work of making his vision real. His is the vision and that is all that matters in the environmental circus. I marvel constantly at the disconnect of people within the Oil vs Renewables Debate, there is currently no direct correlation with electrical generation via renewables and oil demand. PERIOD. You are making a technological leap of faith to the electrified transporation system, by-passing all the requirements to get there. Then to make it worse as Mr Ellis points out oil is not just Petrol or Diesel there are several vitally important industrial chemicals and by-products of refining that will need to be replaced as well. Bulding solar arrays and wind farms will not affect the oil demand unless you remake the transportation system, not just the light passenger vehicles but the WHOLE SYSTEM, trains, planes, trucks, everything. In order to electrify the transportation system, at least up to all ground based (leave air travel aside) we will need to effectively double electrical generation ( unless you create a recharge curfew people will charge their vehicles whenever they want so the capacity has to be there) . We are chasing a rolling ball down an slope because we are not building electrical generation and infrastructure fast enough today to catch the ball but are demanding that we increase the incline exponentially. We have a “peak electricity” issue coming fast because we are not replacing the aging systems we have fast enough. This is not because lack of planning it is because the ENVIRONMENTAL REGULATIONS imposed over the last couple decades have increased costs 20% and planning lengths by 2-10 years to provide the assessments and get regulatory approvals from not one but several agencies, that means the delays are causing us to continue to operate facilites that should have already been replaced by more modern and efficent ones that have lower GHG emissions. The environmental movement has hindered our evolutionary cycling of technology by delaying its implementation. We cannot build a cleaner power station, if we cannot ACTUALLY BUILD a cleaner power station. Yes power stations are being built but not at the rate to match demand + attrition, lots of information at the DOE website if you want figures. Is this available as a .pdf? I’d like to send it to a bunch of greens. (if only to annoy them) The points are all well made. If solar were included, the numbers would be even worse. Sure it works, some of the time, with massive subsidy. If more people installed it, it would bankrupt the country (assuming we are not already there) In the U.S. we do have better terrain for water storage, more hills and mountains, but usually in places that folks wouldn’t want man made lakes. And places with dams (Lake Shasta for example) generally don’t have much excess energy to pump the water back up the hill. Without reprocessing, I’m not a fan of nuclear. France has shown that it can be done. If we could get Congress and the NRC off their duffs and approve newer designs with a way to deal with spent fuel, we could have as much energy as we need for as long as we want for a reasonable price. For some reason, I don’t expect this will happen. When I lived in Reno, Nevada, I had a friend that was building a house on a fairly large piece of land. To get commercial power to his house was going to cost him about $30K for the power lines. On the other hand, a complete subsidized solar panel/battery system with a 100% diesel generator back up was also going to cost him $30K. It was sized so that he would not have to change his lifestyle (i.e. read by candle light at night!) One of the key pieces in his decision process was that he could sell his “green credits” to a utility in California (I believe it was PG&E). Every month the utility was going to take money from their rate payers and give it to him to subsidize his life style. He was quite pleased with himself. He considered the subsidy a kind of “stupid tax” on Californians that would not only tolerate it but demand it. Some people are going to make a lot of money (it has been reported that Al Gore has already made some $100 million on this) and some people are going to be a lot poorer. I predict that in twenty years there is going to be a vast array of failed “green power” projects (think dead wind turbines, abandoned because they are no longer subsidized). Much like the energy projects started during the Carter War on Energy that ended up being monuments to fools. After all, if you don’t learn the lessons of history, you are condemned to repeat them. I hate to pick on California, but…. I recall reading a story about a geo-thermal project started by then-Governor Jerry Brown (affectionately known as Governor MoonBeam). It was up in the Geyserville area. It seems that the state built this plant with much fan fare about “green power” saving the planet, etc. When they got the plant completed, they found out that there was no geothermal steam supply in the area. So they shuttered it. Later, it was sold for pennies on the dollar to someone that would build a pipeline to a steam source. (It was years ago that I read this story, so I may have it wrong, maybe very wrong. After all, I am getting really old and I welcome correction.) Regards, Steamboat Jack I’m a hybrid. I’ve ranched for 40 years in the mountainous region of western USA. I’ve logged, mined and farmed over the years. I produce my own organically grown food and meat. I am a staunch conservationist (greenie?). I survived both the Viet Nam war and Haight-Ashbury. I’m on the local watershed council, forest council and I am a hearty advocate of sustainable management of resources. Both of these councils are made up of industry and conservation interests. We have found common ground to work together for our various motives. On our forest council we have been addressing the aftermath of destructive logging practices of the post WW II – 1980’s era. Clear cutting, herbicide spraying and the persistent noxious weed problems have left their toll on the watersheds. Recently we have been enthusiastically pursuing the conversion of the excess biomass of 585,000 acres that have been clear cut since the war into biofuel and biochar through fast pyrolysis. There are millions of tons of cellulosic material in the reproduction zones of public lands out here. These tree stands are so overstocked that they represent a catastrophic fire hazard of proportions hard to imagine. They are even aged stands loaded with ladder fuels that sterilize the earth when they ignite. The cost of producing these acres of biomass is a mere fraction of what it takes to produce corn to create ethanol. Congress signed into law that biomass from public lands can not be used to create alternative energies (Biofuel Incentives and the Energy Title of the 2007 Farm Bill). Why? Richard111 (05:22:42) With regards to the Pembroke plant. With a bit of imagination its emissions will be English. Greenhouse gas emission trends and projections in Europe 2008 from the European evironmental agency I guess You will apologize iff I submit this in French, but I’m not enough fluent in English, in a matter that is not easy even in French. Si je suis d’accord sur les objections soulevées dans l’article, il en est une qui n’est jamais évoquée et qui à mon avis est rédhibitoire pour la filière éolienne, et dans une moindre mesure photovoltaïque. Pour se brancher au réseau, les éoliennes doivent être parfaitement synchrones avec celui-ci.Tout qui a un jour été à la barre d’un voilier sait bien que le vent est variable en force et direction en permanence,et que sans cesse il est nécessaire d’adapter la tension dans les écoutes et dans la barre.Pour une éolienne, le problème est encore plus complexe:la plus infime variation de vitesse de rotation, de par la démultiplication,engendre immédiatement le déphasage et le découplage.De plus, de par l’inertie des énormes pales, l’éolienne va avoir tendance à accélérer ou à ralentir avant même qu’une correction du “pitch” ou de l’orientation soit effectuée.Pour palier à ce problème sinon insoluble, les éoliennes sont équipées de générateurs asynchrones,branchés directement sur le réseau.Ceci permet de tourner en permanence à la vitesse de consigne du réseau, et de débiter lorsque l’éolienne accélère.Vous remarquerez que dans un champ d’éoliennes, elles sont toutes parfaitement synchronisées, alors que sur l’étendue, des conditions de vents très variables sont inévitables.Autre avantage, l’éolienne qui est mise à la vitesse de consigne,par le courant pris sur le réseau, n’a pas à vaincre l’inertie du système pour se trouver en conditions de production, profitant de la moindre bourrasque.Inconvénient:le courant généré est en avance de phase.Si la production est minime,cela n’a pas d’inconvénients.Par contre, dès que la production devient importante,ce courant déphasé va faire chuter la tension ,aggravant une éventuelle charge limite du réseau(En Allemagne, ce phénomène prend des proportions telles, que l’on a vu des lignes haute tension branchées à des champs d’éoliennes, chauffer au point de voir les cables traîner par terre)Il n’est pas douteux que la grande panne de 2003 vient de ce problème non résolu.On comprend donc aisément que dans l’état de la technique,il devient périlleux d’aligner une trop forte proportion de courant généré à partir d’éoliennes.On avait bien imaginer de faire produire du courant continu, pour éviter ce problème, avec des onduleurs en aval mais alors on débouche sur un autre problème qui est celui du photovoltaïque,à savoir que les courants ondulés à partir de “hacheurs” électroniques n’est pas sinusoïdal comme le réseau.A nouveau, la mise sur le réseau de courants pulsés à divers endroits de celui-ci peut engendrer des phénomènes de résonances difficiles à maîtriser dès que la puissance en ligne devient significative. Comme on le voit,ces filières sont loin d’être suffisamment au point que pour espérer prendre le relais de centrales classiques.Il n’y a à mon avis aucune autre raison pour que l’Allemagne ait arrêté son programme éolien et ait mis en chantier 15 centrales classiques. However, it is my belief that this sublime day-dream actually holds the seeds for our economic decline and for social disorder on an unprecedented scale. Why? Because no technical and industrial society can maintain itself on unreliable and intermittent power supplies. Ralph Ellis clearly belongs to the reality-based community. I can’t help but remember this quote from Ron Suskind in the New York Times 5 years ago. It’s about the faith-based Bush presidency, but it may as well be about a faith-based environmental.” Nothing has changed. Nothing at all. Looking at these large scale systems and estimates of cost often lead the greens to argue the estimates are false. On a smaller scale, a neighbor of mine installed a solar system. I was interested, and asked for a tour. He explained the system had cut his power costs by 1/3, roughly $75/month in this area. Cost of his system? $23,000 dollars. I’ve seen different estimates, but world coal reserves I believe exceed 1000 Gigatons. Our Oil shale reserves I’ve seen estimated equivalent to 2 Trillion barrels of oil and Canada has another 2-3 Trillion. A recent post on this site listed natural gas supplies sufficient to last 100 years at the current rate of consumption, and noted that exploration has already invalidated claims of limited supply from 20 years ago. Eventually, these sources will expire. But it appears we have sufficient supply to continue unimpeded for the next 50 years while we spin up Nuclear resources, and improve other technologies. But the risk I see, also noted in this post, is that our so called leaders are making the choice first to limit supply of existing energies, while floating in a dream world of ‘renewables’. I see rolling outages and rationing leading to social unrest if this lunacy isn’t stopped. ralph ellis- Great post. You responded to another comment with- “>.” The technology to do this is actually starting to roll out now. Advanced Metering Infrastructure (AMI) includes new electronic electricity (or gas or water) meters placed at every customer premise. They communicate in both directions to allow remote reading of energy use, keep track of when energy is being used to allow time-of-use billing, and permit remote controlled disconnect of power. The EMS software database (Energy Management System) can be configured in a myriad of ways. So here is how to make a substantive connection between an eco-consumer and his/her beloved eco-energy source. The customer is offered a choice of energy sources, either conventional fuel mix or renewables. The renewables would include solar and wind. For the UK, it would be 99% wind, because solar is a big-time loser at your latitude and micro-climate. The energy charges for the customer are directly tied to the cost of purchasing electricity from the energy source selected by the customer. This is a simple software database option. The true cost difference would be in the range of a factor of 3 – 5, depending on where the customer is located and what wind resource is deployed (off-shore vs. on-shore). There would be no sharing of renewable energy costs across the entire customer base to hide the real costs of renewables. Eco-consumers who believe that renewables offer ‘free energy’ will jump at the chance to pay ‘nothing’. Now for the best part. The customer who chooses renewables will be disconnected from the grid in synchrony with the energy delivery from their energy source chosen. For example, choosing renewables in the UK would result in your meter disconnecting you from the grid anywhere from 0 to 20 times a day, at random times. This again is a simple database link between the EMS software and the SCADA (Supervisory Control and Data Acquisition) software that monitors generation across the power grid in real-time (10 second updates usually). Utilities call this load-shedding to maintain system balance (usually only used in emergencies). We could rename it Renewable Energy Advanced Management of Eco-Demand (REAMED). In this way, the green customer can experience the ‘advantages’ of renewable energy in its full glory. They are exposed to the real busbar costs under realistic depreciation rates, and they are REAMED by the variability of a wind-powered resource. Of course, the customer can always switch back to reliable, less expensive electricity with a simple phone call or website click. I believe the UK just announced a country-wide initiative to install AMI/smart grid meters in every customer location over the next several years, with the feel-good goal of reducing electricity demand through in-home displays that show real-time electricity pricing. In a few years, all of the tools needed for this brave new world of realistic renewable energy pricing will be in place. As an aside, I live in sunny Florida and price solar PV every few months for my home. Even here, and with generous federal and state incentives, solar PV will still almost triple my monthly electricity bill. Thankfully, Florida has some of the worst wind resources in the country, so wind really hasn’t gained a footing. Florida is also trying to build 4 new nuclear reactors, but there is some opposition. Any problems with nuclear waste? Drop it down a plate subduction zone, one-way ticket to hell. Carbon sequestration? Shove it into market greenhouses and pay for it with the harvest, keep and dry the rest of the plant if you must ‘sequester carbon’. Frankly given the uselessness of wind now it makes sense to hydrolysize to hydrogen at the farm. We have gas storage and pipelines, so though initially inefficient transport and storage of hydrogen may be the way to use the damtings. I thought that this article was new but it seems it was written in some 5 years ago in June 2004? What an excellent article! Ralph Ellis, you have done magnificently! Every person on this planet who uses electricity should read and digest this. It is a tragedy of our times that those who, by some wangle or another, have wormed themselves into power to usurp charge of our destinies, will make it their business to totally disregard the stark facts of this exceptional presentation. Mike McMillan (02:16:38) : .” I want to preface the following by noting that I am a cautious advocate of nuclear power, but Chernobyl was worse than most suspect. Certainly worse than the Soviets reported. The following are excerpts from a paper I wrote on Ukrainian foreign policy at the Naval Postgraduate School in 1997 (so the present tense used is decidedly past tense now). These excerpts cover an overview of the explosion, the immediate and long-term health effects. ……… During the course of the Soviet Union’s ambitious nuclear power program, the Soviets built twenty-eight nuclear reactors of a particular design known as the RBMK. The RBMK is a graphite-moderated, uranium-fueled reactor that was known to have serious, inherent, design flaws and it is widely accepted that the RBMK would never have been certified by a western safety inspector. The V. I. Lenin Nuclear Power Plant at Chernobyl, Ukraine, is a massive complex housing four separate reactor units—each a second-generation RBMK reactor. Although by western standards the RBMKs are considered dangerous (the RBMK reactors lack secondary containment structures and were notoriously unstable at low power levels), the safety record of the reactors at the V. I. Lenin Nuclear Power Plant was unremarkable in any sense by Soviet standards. The Soviet engineers and technicians were not unaware of the potential problems with the reactor design, but universal acceptance of the safety of nuclear power by the Soviet engineers seems to have caused widespread complacency at the complex. In the morning hours of April 26, 1986, the Chernobyl technicians at Unit 4 began a series of safety drills designed to study low power operations. In contravention of their standard operating procedures, the technicians purposely disabled safety mechanisms on the RBMK reactor prior to beginning the drills. A series of operator miscalculations compounded by poor engineering and pure bad luck culminated in a devastating explosion at Unit 4 of the V. I. Lenin Nuclear Power Plant at 1:23 in the morning of April 26, 1986. Immediately following the explosion on the morning of April 26, Soviet fire fighters attempted to put out several fires that were burning out of control. Simultaneously, as the Chernobyl workers realized that the reactor was open to the elements, attempts were made to extinguish the principal fire, which was believed to be the reactor core. Over 5,000 metric tons of sand, clay, dolomite and lead were dropped by helicopter (the majority of the helicopter pilots later died from radiation poisoning) on what was believed to be the reactor core. Most scientists believe today that Soviet efforts to extinguish the core over a ten-day period actually missed the main reactor core and were concentrating instead on a core fragment. The belief at the time was that the core had been ejected from the reactor chamber by the steam explosion, when today it appears that about 71 percent of the core remained in the reactor shaft uncovered and open to the sky. Consequently, far more radiation (estimated at three times more cesium 137) escaped into the atmosphere than had been initially reported. What happened with the remaining fuel? Sich states, “Eventually, the fuel melted through the reactor’s lower lid and flowed into the lower regions of the reactor building, where it cooled and hardened into lava-like substances.” Over the course of 10 days, the core released radioactivity unabated into the atmosphere until it cooled by itself. Out of the original 190 tons of uranium oxide fuel, somewhere between 10 and 50 tons remain unaccounted for today. Although Unit 4 is entirely encased in a concrete sarcophagus, the sarcophagus itself is decaying and largely open to the elements and the missing fuel, which is presumed to be underneath the reactor, still poses a grave environmental threat. Rainwater collecting in the bowels of the reactor level could theoretically cause remaining fuel to go critical and the meltdown would resume. At least twice since 1986 have neutron flux readings indicated a build-up towards a self-sustained nuclear fission was possibly beginning. Although the key to solving this issue is to locate the missing fuel and pump out excess water, the high radiation levels within the sarcophagus are extremely dangerous for search crews and Chernobyl officials must first find storage for the highly irradiated water. …….., in the May 1996 edition of the Bulletin of Atomic Scientists,. ….. The question of the long-term health effects of Chernobyl is also. ……… >>However, I can’t get on board with the fission and >>particularly the breeder idea. >>Chernobyl made a huge mess and put millions of people at risk. Two comments that have come up and demonstrate that the Green anti-nuclear agenda is still bearing fruit. Firstly, it should be reiterated that Chernobyl should never have been built in the first place. It was done on the cheap, because that is the Russian way; without a containment vessel, without a fail-safe design and without adequate instrumentation and controls. In addition, the operators were playing around, rather than running a professional operation (another Russian trait from my experience in aviation). Despite all this, the actual damage caused by the Chernobyl incident was much smaller than at first feared. As usual, the Liberal press try to demonise this incident: But the WHO says that 50 died and ‘up to’ 4000 people may suffer a premature death due cancer. Each cancer case is tragic in each individual case, but in total it was not an absolute social calamity. . . But what is often forgotten, in all of this fear of nuclear power, is that energy is a powerful substance and the use of energy is always going to be dangerous – whether it is coal, oil, gas or nuclear. If Chernobyl had decimated two entire schools, would we ever have been able to forget about it? So why do we forget about a coal or an oil disaster? And oil workers: And this is the blood upon which all America’s and Europe’s current wave of cheap imports is founded. Around 6,000 deaths each and every year, and not a Greenpeace complaint in sight. Here are some more incidents, just for illustration: . The bottom line is that we cannot run an ever expanding, ever richer society, on nineteenth century energy supplies. We need to learn how to control and police the use of new energy sources in a responsible manner; and NO amount of technological regression back to the Stone Age will ever un-invent nuclear weapons capabilities. So hiding behind a cloak of quaint rural poverty will not protect anyone from a rogue mullah in Iran with a death wish. . You know, when I was a lad, we used to look forwards. President Kennedy said ‘let’s go to the Moon’, just for the hell of it, and we did. The Brits (and French) said, ‘lets travel at supersonic speeds’, and we did. We all said ‘let’s develop nuclear power’, and we did. By the time I was middle aged, it was certain that we would be on Mars and beyond. This was a forward-looking, can-do generation. But what have we now? You cannot fly at supersonic speeds, as that may kill a crested newt. You cannot go to the Moon, as we have to give food to Africans who have failed to feed themselves for decades. You cannot have nuclear power as it might be dangerous, and we cannot have anything that might disturb the cotton wool around our children. We now have a regressive, can-not generation. But I am still with President Kennedy. If we are ever going to succeed as a species, we need to progress, to overcome the challenges that lie ahead, and provide a better world for future generations. And that better future cannot lie in some idealistic, rural nirvana from a Green fantasy resembling Frodo Baggins’ Shires. To make that scenario work, we would have to extinguish 98% of the world’s population. No, the future lies in expansion throughout our Solar System, and you are not going to achieve that with a few dozen windelecs (wind turbines) that run on Tuesdays and Fridays. Can we refrain from branding progress as a sin? Can we banish the Technological Taliban? . An someone who works regularly with environmentalists , I find the term “Technological Taliban” both accurate and appropriate. And 5 years old? Wow, where was Ralph hiding this gem? The Law of Unintended Consequences, as applied to wind farms!. 1. Renewable power works quite well, and provides more than 13 percent of all power sold in California. This 13 percent includes solar, wind, geothermal, small hydroelectric, and various forms of methane production or capture from organic materials. These include landfill gas (LFG) and methane gas from cattle manure. Also, combustion of municipal solid waste contributes to the renewable energy portfolio. The 13 percent figure increases to 23 percent when large hydroelectric is included. (figures for 2008 in California) Wind, solar, wave, and tidal power are truly intermittent, but the other forms of renewable are much more reliable. These include hydroelectric, geothermal, the bio-gas forms, and municipal solid waste. 2. ESS systems that work include batteries, ultra-capacitors, pumped storage hydroelectric, compressed air energy storage, superconductors, high-speed flywheels, high pressure hydraulics, and thermal storage. Each has its place, depending on many factors such as the local terrain, and quantity of storage desired. One can read more on ESS here: 3. Nuclear power is frequently (and wrongly) held forth as the path forward, while dismissing renewable energy sources as a waste of money and effort. The fact is that nuclear power plants are extremely expensive to construct and decommission, yet rather inexpensive to operate. The misleading nuclear proponents claim that nuclear power only costs 1.7 cents per kwh, and thus all power plants should be nuclear. This figure only includes fuel, labor, and maintenance, but conveniently ignores the huge costs for initial capital and decommissioning. That mis-direction is analogous to a person buying a $200,000 Mercedes Benz automobile, and having access to gasoline at 50 cents per gallon. When asked for their cost of transportation per mile driven, they conveniently ignore the huge monthly payment to the bank for the car loan, and gleefully tell you that their car costs only 2 or 3 cents per mile. My recent analysis of EIA data for nuclear power plants and the cost of electricity in the U.S.’ various states with nuclear power shows that cost of electricity increases as the percent of power derived from nuclear increases. There is a direct correlation. Also, that analysis was performed with old nuclear plants that are to a large extent already depreciated or paid off. If new nuclear power plants were to be built and included, the costs would be much higher. and scroll down to the phrase, “is nuclear power affordable.” If we think the environmental impact from coal and oil is interesting, wait till we see the impact on weather patterns and then the life it supports, as we stagnate the wind; the impact on sea life and then human life, as we stagnate the tides; not to mention the footprint and micro and small life biology that supports the rest of life, as we cover the earth with wind and solar farms. While these may fill a minor niche, if we build it big enough to supply reasonable comfort to human life, it is going to have a much greater and destructive impact than we now have… and at least CO2 encourages plant growth for a needed increase of food production, which in turn provides a needed increase in oxygen production! I can’t imagine what the mind set must be for those who would destroy a mechanism for increasing food and oxygen production? And, especially as we learn that there is little if any effect on global temperature! There is no doubt in my mind that our present simple minded ideas of sequestering CO2 will come back to haunt us, either from starvation, or similar to the gasses and pollution from garbage dumps, etc., ha! While we may want to work toward better control of our byproducts, to simply trade one killing zone for another, for the benefit of another big business, and send the rest of us and our great grand children, back to the stone ages, is even more ludicrous! Stephen This is not really related but could be if we talk about biomass (which is also part of the alternative energy family)… Vine growth and survival might be one of the greatest proxy to follow the planet warming/cooling phases. As we all know, there was a time during the Medieval period when they could grow vines in Northern Europe. A good idea of the trade route at the time can be reviewed in “Wine and the vine: an historical geography of viticulture and the wine trade By Tim Unwin, P. T. H. Unwin”. When the cold climate came back, no more vines could be grown in Northern Europe. Throughout history vines have followed the general cimate of the earth. Today there is an article that was published here in BC that sheds more light on the vine reaction to climate: It’s a very interesting article that shows the importance of the PDO and global climate and what has been seen here for the last 50 years. Years ago I read the the costs for a power company was about 1/3 for fuel, 1/3 for distribution, and 1/3 for power plants. Since the renewable sources are not continuous, the costs for the power plants essentially doubles. Now if you add enough short term storage to allow the normal plants to shut down when the wind is blowing, the costs for the power plants will triple. If you keep a spinning reserve of conventional plants the whole idea is pointless as there is little reduction in the CO2 plant food emissions. Land use at least triples now that we are beginning to recognize that the subsonic pressure waves from the wind turbines means that they cannot be located closer than about 1 km from residences. There will also be an increase in distribution costs to bring in the energy from the low density sources. The free renewable energy costs at least twice as much while industry moves to China so no one has a job to pay for it. Tax revenues decrease so the government will have to raise taxes to pay its electric bill. The only project that I can see worth doing would be solar powered air conditioning. Solar energy would be available when it is most needed and it would reduce the peak demand on the conventional plants. Efficiency requirements would demand the the use of a thermal system so no electricity would be generated. The whole debate changed this morning when the sun rose in the west over North Korea.Iran will follow shortly,Global Warming will fade in the cloud of fallout.This is something we have to deal with now.Millions of lives-possibly our own are at stake. Life or Death. which is it? The Dictators,Dingbats, and Tyrants of the world rejoice…. It’s not about science. It’s not about logic. It’s all about politics. I tried to get people over at solarcycle24.com to write their representatives. Except for a few trolls and semi-trolls, these are mostly anti-AGW guys who can write non-stop about all kinds of stuff, day after day, and not stupidly, either. I think I counted 5 or 6 people who actually wrote their representative. While I was in aerospace in ancient days (Nike Zeus, Thor Agena, Skybolt, Saturn S-IVB), I did a motive systems study. I looked at everything in the way of energy sources: Capacitors. Springs. Batteries. Gas. Fuel cells. Flywheels. Solar. Wind. Broken wind. Everything. My conclusions? (1) Nothing beats gasoline for propelling vehicles. Only diesel comes anywhere close; alcohol and LPG are way down the list. (2) Our biggest need is not new or renewable sources; our biggest need is a dense, inexpensive storage system. What we have is dense politicians, instead. Always remember: liberals, like alcoholics, judge themselves by their intentions, not by their results. Ralph Ellis, My comments about Chernobyl had nothing to do with a “green agenda.” My family was personally affected by Chernobyl, and everyone familiar with the details knows that it was a huge mess. There is no comparison between the dangers of nuclear proliferation and natural gas explosions. Coal fired plants work just fine, particularly since they were cleaned up in the 1960s. John Egan (06:26:05) : There’s even LESS extractable uranium than oil. You can do it with other lighter materials. See: If nucleus are close enough probability increases. jorgekafkazar (10:33:12) :’m optimistic that our current renewable energy psychosis will turn out to be a temporary disorder. Those of us (science nerds?) who follow this issue with at least some basic understanding are in the small minority. My observation is that the vast majority of folks out here in Mainstreet USA are not interested in this issue and spend very little time thinking about it. They certainly don’t understand the real consequences of a major switch to renewables away from conventional generation. When they express support for renewables in various polls, they do so only with the assumption that it will not affect their own lives significantly. Give them an accurate picture of a renewable energy future and then see what the polls show! If the currently proposed energy policies ever actually get implemented (and I don’t believe they ever will on a major scale), the effect on cost and reliability will open the public’s eyes in a hurry. When that awakening occurs, the voters will quickly throw out the current management and this renewable energy nonsense will come to a screeching halt. The Indian says “…there is much foolishness in the world, the most natural forces to harness are the Winds and the Seas. Let us feel their force and use what they offer” Roger Sowell The reason the Construction and decomissioning costs are so high are because of the ENVIRONMENTAL REGULATIONS compliance which costs huge amounts of extra capital. Approve a nuclear waste storage facility so operating reactors can get rid of the spent fuel they are storing on-site now, that will reduce decommissioning and operating costs dramatically. New construction techniques have brought actual engineering costs way down because the current family of reactors are not untested technology. For example the Canadian Candu reactors, there are 42 operating on the planet now, the cost is about 1.3 Billion USD for a 1.0 Gw unit and decomissioning is costed at 10% of Construction Cost. Last reactor build in China was 54 weeks from first pour to 100% power, NO RED ( or do communists have blue?) TAPE. No work stoppages every month for environmental assessments as to the impact of concrete pours on the butterflies. Siting also did not add 5-7 years of work to the engineering costs. I realize the environment is important but the regulations are a cruel joke, it is not just the US it is Canada (to build a Candu in Canada is almost impossible these days because the friggin hemp hats go ballisitic) and in many EU countries. The logic used to argue against a proven technology that solves the perceived problem and has solutions for all the downside arguments is beyond any sort of defense, it is simply irrational to have part of the solution and not implement it. Shr_Nfr (08:18:44) :. Except of course with its very wide combustion limits you can burn it under conditions which produce less NOx, also since you would not need a three-way catalyst the exhaust can be cleaned up very effectively.. Still renewables are at the lowest percent of electricity generation since at least 2004. They peaked at 30% in 2006. Arthur il devient périlleux d’aligner une trop forte proportion de courant généré à partir d’éoliennes It is not only dangerous but foolish and costly. tipiheaven (11:00:25) Let us feel their force and use what they offer Then make a drum dance to see if you can generate electricity :-) Roger Sowell (10:14:00) : “…quite a number of energy storage systems (ESS) exist and work quite well; their drawback is one of economics, not practicality…” Economic failure IS a practical drawback and, in fact, the most fundamental one. “.? Consider, too, that much of the expense of nuclear has historically been the result of deliberate green interference in the licensing and construction permitting process. Total project lead-time is a large factor in out-of-control nuclear capital costs, one that needs to be addressed. I visit this website daily and look forward to its postings, but can someone define who Steven Goddard is ? I’ve asked this a number of times with no answer given by anyone. We read many good blogs by him without any understanding as to if he a writer, scientist, fellow researcher ??? “The climate crisis, economic crisis and energy security concerns will begin to unravel if we start a shift away from expensive, vulnerable and polluting carbon-based fuels,” former U.S. vice-president and campaigner Al Gore told the conference. What a bunch of [self-snip]. Considering that standard hydrocarbons and hydrocarbon derivatives are still cheaper than synthetic or wind/solar alternative, where does he get his numbers? Let me guess… from his [self-snip]. If you can bare the pain, read the rest at: Greg (07:59:15) : Greg you missed the point. We are NOT “running out of oil”. The critical issue is “LIGHT oil”. There is about three or four times as much heavy oil, bitumen etc. as there is “light oil”, plus there is coal. However, show me how you run your vehicle on TAR or COAL? The fuel needs to be extracted, then upgraded, then converted into syncrude – and then refined. Adding capacity costs $100,000 /bbl/day. To replace 100 million bbl/day will “only” cost $10 trillion. More importantly, this WILL NOT HAPPEN OVERNIGHT. US peaked in 1970. Oil imports have been steadily increasing. These are the issues that must be dealt with. Cut transport fuel in half – you will cut the economy in half overnight. It takes a generation to transition. mkurbo, I’m a veteran scientist/engineer/environmentalist with degrees in science and engineering. I am not a climate scientist. Hi Ralph Ellis Thanks for your reply and the links concerning Denmark. Will make an interesting read. As for another question of mine you answer it as follows: >. Exactly this is one of the merits of the market based approach. It clearly illustrates to the customer if a product is not efficient/reliable. However it could work with storage like how salt is utilized for the large solar plants around Las Vegas. Again at an extra cost but if there is a demand it should be possible to provide a supply is all I am arguing. The present energy system is flawed as an economic market resulting in low differentiation and lack of innovation. Also as Chris Y relates the technology exists for advanced energy registration and management. Not saying it is simple or easy to implement an efficient energy market. Merely pointing out how it would relieve ‘society’ from some of the influence of the green lobbies. Um, your essay would have more credence if you would have gotten the source of the tides correctly–they don’t depend on the moon’s orbit, they depend on the Earth’s rotation, moving under the moon’s position in it’s orbit. Thus there are two tidal cycles per day, high tides under the moon and opposite the moon (to within the amount each tide is pushed forward by friction with the Earth’s crust, usually about 1 hour, and moving back as the moon slowly orbits the Earth, which makes cycles closer to 11 hours than 12). Agree about what you’re saying, just don’t let factual errors kill your credibility. There is no “right” to survive, only a need to survive. Ray (10:55:28) : “jorgekafkazar: agree, more efficient motors would be nice, but it ain’t gonna happen, much. There are fundamental thermodynamic limitations to the Otto cycle and the Diesel cycle. Also, typically, lower energy density fuels, such as alcohol, yield lower mileage. There are also times when fuel density is very important. Example: A friend sent me an article on a new, very efficient fuel cell. It sounded wonderful, but a few minutes of math revealed that this marvel would require a 600 gallon fuel tank to obtain the same range as a standard gasoline-powered car, not counting the loss in mileage due to lugging the fuel around. So much for performance. The typical 21st century car is going to look less like a car and more like a cross between a Formula Junior and a bicycle. At that scale, gas tank size is very important, since you don’t want it in your lap when you get hit by Arnold’s Hummer or Al Gore’s limo. Greg See the best actual public data IEA WEO 2008 – World Oil Forecasts using Wikipedia Megaprojects, Dec 2008 How do you propose to keep our economy going as LIGHT oil declines? @ adoucette (11:36:26) : ” word power does refer to electricity. As to the renewable figures you stated, you are quite wrong, sir. The figure is 23 percent, please see the link I referenced. Hydro is right at 10 percent, which is less than half of 23. .” Are you picking cherries? The wind generation increased more than 33 percent since 2001. So the production decreased 6 percent (I will accept your figure for argument’s sake only), when demand decreases (due to decreased economic activity), one would expect production to decrease. “Still renewables are at the lowest percent of electricity generation since at least 2004. They peaked at 30% in 2006.” And your point would be…..what? California is taking advantage of the natural resources available, including abundant sunshine, a bit of wind, a generous portion of geothermal, substantial hydroelectric, and great bio-gas opportunities. California also mandated high-efficiency power from fossil-fuel sources, which effectively rules out any power from coal. Thus, the coal-based imports will be phased out and replaced by either renewables or natural gas. Steven Goddard (5:39) is making the usual mistake of conflating nuclear power with nuclear weapons. If countries like North Korea or Iran insist on trying to make nuclear weapons, the absence of civilian nuclear generation facilities will in no way hinder them from doing so, just as their existence under safeguard in no way aids them. It may come as a little surprise to Steven and his ilk, but nuclear weapons programs are always dedicated military enterprises that have nothing to do with civilian activity. I think there are several issues here, the first being that there are lots of countries selling us energy-oil and gas primarily-who don’t like our way of life, which makes us very vulnerable to political pressures. in my opinon, security of supplies is a much bigger problem than agw, and is a compelling reason to develop other energy resources that is quite separate to any other arguement. We also have the considerable irony that it is the transfer of such large amounts of our money to unfriendly countries that will help them to threaten us. Secondly, that each country needs to use what is most appropriate in their circumstances. The UK sits atop vast coal seams said to be sufficient for 300 years and being an island is washed by strong tides and waves. Yet we won’t use coal and research into waves/tides is minimal. Water based resources are at least 10 years behind wind technology-which I would consider an unreliable embryonic system at present. Our politicians marched against nuclear in their younger days and pay lip service to new stations.They also hate coal, but as the UK has an ageing power generation system it is difficult to see how renewables-which by their nature are intermittent- can possibly fulfil the role of providing a reliable base power source. A useful top up yes-our main source , no. We need a large number of power stations that can produce a quantifiable amount of power that can be calculated and relied on. I have no hang ups about nuclear or coal but accept in the real world we will see very little new capacity in the next 20 years from either source. Personally I would use our coal reserves on the basis this gives us a breathing space. I would also allocate far more resources to tidal/wave and as a matter of urgency look into other possibilities, be it hydrogen or some other resource. One potential source that wasnt mentioned in the aricle was thermal reach. This exploits the considerable temperature difference betwen the top of the ocean and the layers lower down. This temperature difference is easier to exploit in countries such as the UK which has a broadly suitable ocean temperature profile, although conversely much of the oceans around us are relatively shallow, which might limit the amount of suitable locations. I have heard of a few small projects looking at thermal reach-I dont know if anyone here has any more direct experience of the pros and cons of it than I do. Tonyb Gianfranco (02:43:38) : “Nuclear waste…” Gianfranco, this type of flaccid remark without elaboration is what is wrong with the whole issue of global warming and energy source (non)debate. These little “sound bytes” are what are trundled out in the midst of serious discussion and are the green picture that is to mean a thousand words. Bury the waste deep in a granite batholith. Someone posted above said drop into a tectonic subduction zone, save it up and shoot it at the sun while we are on some other mission. One of the best kept secrets by dimension stone industry is that your average red granite that is sitting out there for all to see and walk over is a uranium ore at about $150/lb U3O8. Take a geiger counter with you when you visit a building clad in red granite. JamesG (04:22:04) : “use CO2 for green houses? pressurizing oil feilds?” James, we are talking about 10 trillion cubic metres of CO2 a year – in 15 years it will be more than measured natural gas reserves (see my post above: Gary Pearse (06:06:08) : on sequestration and disaster) John Egan (06:26:05) : “shortage of uranium…” There is no shortage of U – there has been a shortage of exploration for it for 40yrs because of anti nuclear. At the price you mention, mining companies were finding it all over the place. Even red granites are U ore at 150 bucks a pound and every phospate plant can recover substantial quantities of it. “>>Obviously there is a demand for ‘green’-energy. Well then >>let people be able to sign up for ‘expensive’ power and pay >>twice the price for their electricity. We did something like that in California a few years back. It was called deregulation, I believe. I remember a form coming in the mail that gave a choice of buying electricity from a ‘green’ company (hydro-generated) or conventional (coal/oil-fired) source. Shortly thereafter, rolling brown-outs and blackouts occurred, the state budget reserve was spent to purchase powerlines, and the governor was kicked out of office. Now, the new governor has ‘gone green’ , we have +10% unemployment and the state government is essentially bankrupt, operating on borrowed funds. Watch what happens here in CA, it will be coming to a state/country near you soon! RJ @jorgekafkazar (11:40:31) : “Roger Sowell (10:14:00) : “…quite a number of energy storage systems (ESS) exist and work quite well; their drawback is one of economics, not practicality…” Economic failure IS a practical drawback and, in fact, the most fundamental one.” No, primarily large storage is not yet economic. Many companies enjoy small storage systems to great advantage, and find the economics quite attractive. A battery-based UPS (uninterruptible power supply) for computers and sensitive electronics is one example. High-speed flywheels also enjoy good sales. If modern governments supported large-scale ESS in the manner they formerly supported large hydroelectric dams and generation, the picture would be quite different. Just to name a few, I refer to Hoover Dam, Glen Canyon Dam, and Bonneville Dam. The country did not seem to suffer from those instances of government largesse. A large storage system that employs batteries is approved and being installed on Catalina Island offshore California. That is the most economic solution out of roughly half a dozen systems studied. ““.?” Ah, the subsidy-is-bad argument. Literally dozens of things receive federal or state subsidies in one form or another. Are you opposed to all, or just singling out power generation? Nuclear power, as you bring up, is greatly subsidized by federal money. The amount of federal money that pours into regulatory agencies for nuclear is staggering. If, as you advocate, we are to eliminate any power source that receives a subsidy, then nuclear would be the first to go. “Consider, too, that much of the expense of nuclear has historically been the result of deliberate green interference in the licensing and construction permitting process. Total project lead-time is a large factor in out-of-control nuclear capital costs, one that needs to be addressed.” One could say the same about obstruction for natural gas plants, coal-fired plants, large hydroelectric plants, solar and wind plants, and wave power; the greenie-weenies are opposed to everything. In the case of nuclear, there were very good reasons for the changing regulations and inspections. Nuclear power is ultrahazardous by legal definition, a designation not applicable to any other power source. Or, do you advocate that nuclear plants dispense with the containment dome, the spent fuel storage areas, the safety systems, and the requirement to withstand an impact from a large commercial aircraft? Those design standards are there solely because sober and wise men understand that radioactive nuclear fission is deadly, toxic, very long-lasting, and will kill people without those safeguards. With regard to nuclear waste, why did we ever stop dumping it in the ocean? Remember, radioactivity is ubiquitous – the only issue is concentration. We concentrate it to use it (enriching Uranium etc.) because that is the onlyway to get useable energy from it and for disposal, we should simply dilute it again. The UK waste that has got everyone so upset was packaged in such a way as to deliberately leak slowly so that in an ocean trench it would be diluted as it leaked and decayed. Storing it on land – where it still leaked as the drums were designed to – has created a radioactive plume in the subsoil. Hardly an unanticipated occurrence. Imagine the good to be done if a fraction of these funds were directed to clean drinking water in developing nations. Conservative estimates list 4 million deaths per year due to poor drinking water. I had a pleasure to add “technological taleban” to wikipedia greenpeace page. Counting 15 min, nobody took it away yet? And people who are against nuclear should keep remembering: France has done it: and the french are not into wasting money. *. ….. The cost of nuclear-generated electricity fell by 7% from 1998 to 2001 to about EUR 3 cents/kWh, which is very competitive in Europe. The back-end costs (reprocessing, wastes disposal, etc) are fairly small when compared to the total kWh cost, typically about 5%. The whole link is worth reading particularly to help nucleophobes realize that there are working and economical solutions in one of the most civilized countries on the planet. JamesG (03:21:50) : Sellafield is being decommissioned now and the costs are spiraling out of control – the latest estimate being 80 billion. Spurious court cases by the Eco Taliban have brought about the requirement of 10,000 years of sequestration of spent material. But even that’s worth it to have lights, heat and machinery working when we want it, not just when the wind blows. Besides, as others have pointed out, newer reactors are magnitudes more efficient than early reactors and can reprocess fuel. The left has a long history of being utopian. Mark Bowlin (09:21:47) : Thanks for the excerpts. Any chance the entire paper is available? Was Read’s “Ablaze” among your source material? This article, like many others, puts forward the notion that when we run out of oil, modern life as we know it will cease to exist due to our dependence on petrochemicals. That is not accurate. We can make “petro” chemicals from anything containing carbon. That includes coal, trees, algae, and even trash. This is NOT a theoretical. Actone was produced by a bacterium (the method was developed by the person who became one of the first leaders of the new state of Israel). EMN Eastman Chemical is still using coal to make its plastics (they never moved away from coal when the rest of the industry did at the onset of cheap oil). DOW Dow Chemical and other US producers use a lot of Natural Gas and between the quantities recently liberated by new “tight shale” techniques and the future availability of Methane Clathrates from the sea bed; we have natural gas for many centuries. DD DuPont is working with BP British Petroleum to produce butanol, as a fuel but usable as a chemical feedstock, from biomass on a production basis. And RTK Rentech is presently producing agricultural chemicals from trash. There are also a half dozen minor companies using various forms of pyrolysis to make a synthetic oil from plant matter and trash; along with others making oil from algae. We will never run out of “petro”chemicals. The notion is also put forward that we must have fast breeders to make the Uranium fuel cycle last more than a century or so. This, too, is lacking. While I have nothing against fast breeders (or any of the dozens of other similar reactors) we do have choices. Any of them will do (though markets will only choose the cheapest.) One is a Japanese technique (proven at a test scale basis) that produces U from sea water at about $150 / lb. (roughly $300 / kg – though given Yen changes that will vary). Expensive compared to land based mines ($40 / lb to $120 / lb most of the time) but dirt cheap compared to oil. My final comment is about the overall tone of the article. The dour “only one choice” and everything else is broken tone. This is just way too simplistic and leans toward cheerleading for fission. I’m happy with fission. It works well and is economical. It can power the whole planet. Yet there are existence proofs of people living happy MODERN lives with electricity from nothing more than wind and solar. Yes, converting Britain (or any other established modern energy grid) would be a royal PITA, and not the best ECONOMICAL choice; but that does not mean the alternatives do not provide usable energy and are simply trash. Especially in places like the U.S.A. with a continental scale grid, wind can be effectively integrated with little issue up to about 10% and with minor complexity up to about 20%. In addition to pumped water storage, one can use pumped AIR storage in giant underground cavern / tanks (probably not suited to the UK – we have salt domes…) Wave power is not Tidal, and it is much more dispatchable. A significant percentage of wave power could be integrated into the UK grid with few issues to resolve. Similar potential solutions are available for many other ‘alternatives’ and different places have different best mixes to use. With that said: All this would cost a great deal of money. Frankly, putting in place enough secondary batteries and pumped storage and … to make up for wind being flakey on a modest sized northern atlantic island is incredibly wasteful. So you really need to distinguish TECHNICAL solutions from ECONOMIC solutions. On a technical basis, there are many things we can do. There is not energy shortage and there never will be. It is on the economic scale that the battle is fought and the winners chosen. This is where governments go wrong. They choose a technical solution. Markets choose economic solutions. Markets do the better job of integrating all factors. Governments listen to pressure groups and biased agendas. On an economic basis, we can say that a little wind mixes well with a little tidal and a modest amount of wave and in sunny climates a fair amount of solar power. We can also say that coal and fission beat the pants off all the alternatives for massive base load power and for most swing power. Natural gas dominates the grid peaking power (though in some locations Diesel electric does better – i.e. Alaska away from a nat gas line). The point? There Is No One Answer. If varies by nation, by geography, by fuel market, by individual home. (If I’m more than one or two power poles away from the grid it is cheaper to build a wind / solar system with battery storage than to pay $10,000 / pole for them to be installed…) These kinds of mixed product solutions are handled DISMALLY by governments, by commissions (Truth Commissions or otherwise), and by groups of experts reaching a consensus. They are handled very well by a market based system. So we have a town in Alaska where the entire town is on a giant battery backup system. Why? Lose power mid winter, you die. Fast. France is up to their eyeballs in fission having little else. Saudi uses oil to desalt sea water. I have a standby gasoline generator. California has wind farms and geothermal actively contributing to our needs (along with a big pot of fission, hydroelectric and some pumped storage). Iceland uses massive quantities of geothermal because they have it in abundance. The list goes on for many pages. The only major mistake is to look for The One True Answer. Because there is none. The minor mistake is to disparage other energy sources as Not The One True Answer. Nothing is, so everything falls to that axe. So please accept that there are a great many solutions that all can contribute, but not in all places or for all peoples. Step back and ask for each place: What works best here, for them? A nobody need ever be cold and in the dark. Ever. Hi This post is in reply to a statement by Roger Sewell about the high costs of Nuclear, Generation by individual States after his analysis of EIA Data. My analysis of the same data base(EIA over several years) indicate that his conclusion is highly suspect. For example Illinois, receives about 50% of their electricity from Nuclear Power Generation (Exelon Energy/Commonwealth Edison). The retail cost of electricty in Illinois is a bit less than the average of all the states. Hawaii has the highest retail rates ( mostly oil generation), and Idaho the cheapest (hydroelectric). The states with the cheapest rates are mostly powered by coal, whereas the states with the highest rates, have a very strong green movement.. Thanks for this well thought out overview. My belief is there is a place for ALL these energy sources, albeit some clearly must carry the baseload. New nuclear (fast breeder, fusion and thorium) is appears necessary for any successful program to achieve energy independence. The next question is how to reduce the *number* of new nukes and their large investment capital? One suggestion is to expand the use of natural gas in the home to power combined cooling, heat and power units. These systems, already built by several manufacturers including Honda, can generate the necessary heat to cool/heat a home AND produce electrical energy. No, they are not 100 percent efficient but yes, they are a viable alternative to massive grid and power plant expansion. And they use a domestic resource we have in abundance. In keeping with renewable goals, solar can and should continue to expand – especially in sunbelts where there is reasonably predictable sunshine. The Israeli-built solar concentrator project in Cali is one that will be profitable due to mandated contracts with PG&E. Likewise geothermal and tidal/wave energy can make a valuable contribution. The introduction of hybrid auto transportation lowers the demand for foreign oil – a goal both left and right appear to want. Domestic alternative liquid fuels like ethanol, butanol and biodiesel can replace some demand for gasoline – again, not 100 percent but they make a contribution. And a program to investigate new physics should not be limited by political influences. If there are potentially viable low energy nuclear, or Millsean hydrino-type reactions that are repeatable – they should be openly funded. The time to build transitional technology pathways is now. The idea of depleting one resource before unveiling a new one is erroneous. For a viable energy future we need a broad portfolio of resources. Our wisest step will be to dismiss none in favor of another. There is strength in variability. The issues set out in this essay are all too real but they are challenges and opportunities more than they are problems. I’m with Roger Sowell on this one: storage and balancing supply and demand is something we’ve only started to think about commericially. For renewable energy in general, many of the most promising technologies are not yet commercial and even those that are are in their infancy. Regardless of what our energy future holds, renewables have a window of opportunity currently from market incentives that are driving innovation. While I don’t like the incentives (we are all paying for them) or see a need to mitigate carbon emissions, I am keen on an increased diversity of energy technology availablility. It may only take one or two technologies or companies to make a break-through…. Then there are other issues such as cost and security of supply. Ireland (population 6M) is a small customer when it comes to buying fuels. We need to develop and use what resources we have – wind, wave and tidal in abundance, limited hydro (we’d be fine if rain produced energy). We have gas fields under development but we need to keep options open. TonyB – plenty of research into wave/tidal: (plus many more commercial wave ones not shown. Julian Braggins (02:40:32) :Thorium might have got a mention, inherantly stable , on paper, more of the raw material than uranium and less long term waste disposal problems (but we have safe solutions already for uranium) but have yet to see a working prototype. Um, try Thorium is in production today and has been in production since the earliest days of nuclear power… See: Gianfranco (02:43:38) :and the problem of nuclear waste? how to be addressed? I became much less worried about this when it was pointed out to me that the nuclear “waste” was usable as fuel in an advanced reactor design and that the radiation level coming from the wasted dropped to being about the same as the original ORE, not background, in about 250 years. The 25,000 year issue is to match background and that is, IMHO, a false goal… I also put this comment under the ‘infinite energy’ thread, but I’ll excerpt it here: We can treat all our present nuclear “waste” as fuel, should we wish to. This, IMHO, is the biggest reason NOT to entomb “waste” at Yucca Mountain. I’ve added the “bold”. From: HTRs can potentially use thorium-based fuels, such as highly enriched uranium (HEU) with thorium, uranium-233 with thorium, and plutonium with thorium.. So all the hand wringing over nuclear “waste” and all the folks saying we are going to run out of Uranium since we only use some small part of it in a reactor load; are all missing the point. We do that because it’s easy and cheap. We don’t “waste” the large part of the energy left in a “spent” fuel bundle. We’re just saving it for future generations… [snip – religion is OT ] E. M. Smith, Tony B, Roger Sowell, Pragmatic, Ellie in Belfast have it right. Now is the time to be working on, and experimenting with these projects. In the end, rational minds will make practical choices for their locale. In the meantime, nuke some popcorn, grab a “beverage” out of the fridge, and watch the show. It should be “interesting.” Neil Jones, “Welsh Windbags strike again.” Spot on! Maybe, though, we could somehow harness the ‘wind’ that emanates from the Welsh Assembly. :-) Regarding Chernobyl: Mike McMillan (02:16:38) : Steven Goddard (05:58:30) : Mark Bowlin (09:21:47) : We have friends who host one or two teens from Belarus every summer. The local small community hosts about a dozen. More would like to come here because they also get a week of business training at Central Washington University (Ellensburg). But that and other costs also limits the number that can be hosted. Check out: The Children of Chernobyl United States Charitable Fund, Inc Although, as explained by Ralph E., this was not a nuclear explosion and the design was faulty but the world is still dealing with the consequences. The current children were not born yet when this event occurred (1986) and they suffer because of it. Has anyone fully researched this: From what I have been reading…basically all of our energy needs…and I mean all can be directly provided by the earth’s ionosphere. Tesla proved it and its been kept out of sight ever since…. Colin, Iran has hidden their nuclear weapons program behind their “right to nuclear energy.” Iran’s “right to nuclear energy” allowed Russia to openly assist them in this effort. Iran’s “right to nuclear energy” allowed them to develop nuclear weapons right under the UN weapons inspector’s noses. And breeder reactors will allow Pu-239 to be leaked to terrorist organisations. These facts should be obvious to anyone paying attention. I’ve done a quick skim of the thread and I haven’t seen mention of the positive capacity increase reducing waste and improving efficiency can make. Wanna fry an egg? You don´t need anything, just with GLOBAL WARMING take your frying pan outdoors and break an egg on it. …but it will be possible within 5 years (when the north pole will melt also). :-) It will happen if we can (a) get candidates running for office (local and federal) who understand the realities of energy production, climate, and science in general; and (b) educate the voting public to the point where they will vote for these candidates and not socialist demagogues like Waxman and our current President. The duffs that have to be gotten off are our own—or as some Brit once said, “Extradigitate!” /Mr Lynn A technological Taliban? funny indeed…but, why don’t you tell us who? I just was imagining a CLIMATE TALIBAN instead. :-) Another Post in response to Roger Sewell post about Nuclear Generation in France. Here is some background info: Electricity generation is divided into three groups: baseload (24hr a day /365 days a year) intermediate, and peaking. Peakers are dispatched as needed to maintain grid balance because of small unpredictable variations in demand. Intermediates are used because of predictable variations in demand. For example hot afternoons in mid summers(turned on), and late at night( turned off). Baseload, and intermediate account for more than 90% of the electrical production. In the United States, all of the Nuclear Production is baseload. In France most of the Nuke production is baseload, however, some of the production is intermediate. It is not correct to state physically that Nuclear does not have load following capabilities. One may control the output of a nuke by moving one or more control rods into or out of the core. In fact the load following characteristics of Nukes is similar to that of large thermal plants (modern coal fired, and combined cycle gas turbines, operated in the combined cycle mode). The load following of Nuke are economics, not physical. I challenge Mr Roger Sewell to compare the load following of Nukes with the load following of Wind turbine farms(specifically wind generation) in Denmark, and Germany. Wind has zero, or zip load following characteristics. Ellie Thanks for this. I know there are a fair number of reseach projects, the trouble is that the amount of development i.e installations capable of delivering actual electricity into a national grid is miniscule. A significant proportion of those being researched will fall by the way side as robustness plus efficiency plus price plus repeatibility is an equation that -unfortunately- eludes most projects. tonyb Gary Pearse (12:33:16) : There is no shortage of U – there has been a shortage of exploration for it for 40yrs because of anti nuclear. IIRC Lucy and Ricky Ricardo bought themselves a Geiger counter and went exploring for uranium when they went out west. They thought they’d struck it rich, but it turned out Fred Murtz’s watch had a radium dial. Always happy to elevate the level of discourse. Sheikh Yamani, the Saudi Oil Minister during the Oil Crises of the seventies is memorably quoted as saying:- The stone age did not end because the world ran out of stones and the oil age will not end because the world runs out of oil. S It’s not just a topic for debate anymore–it is now law in the United States. On 5/21/09 the Waxman-Markey Bill, i.e., H.R. 2454, “The American Clean Energy and Security Act,” passed in to law by a vote of 33 to 25. It is law now :”This bill, when enacted into law this year… With this plan, we will shape a new energy destiny for our country,… “. The 932 page law on PDF : They are saying it will save people money. How? TonyB, Only too true. If I could invest money in every renewables technology/project I’m aware of and be guaranteed a 10 times return for every one that fails (not that investment works that way of course), I’d be very rich indeed in about 10 years, perhaps less. On the other hand picking a winner would be much more of a lottery. Ha Ha. I just reread what I wrote about investing in failing renewables – I think that shows just how small my ambitions (resources) would be as an investor! Roger Sowell says The amount of federal money that pours into regulatory agencies for nuclear is staggering. If, as you advocate, we are to eliminate any power source that receives a subsidy, then nuclear would be the first to go. ——————————————— On what world does funding FEDERAL REGULATORY AGENCIES equal a subsidy? That was one telling comment. The Nuclear industry is responsible for the size of Government? No left leaning environmentalist anti-nuclear scaremongers are responsible for the money being poured into these agencies. Nothing like some misdirected blame to better illuminate the issue. For the Record I advocate all subsidies to be withdrawn on all energy sources and have the best one win,… it is a radical right wing free-market concept that may be very foreign to some readers. The Waxman-Markey Bill, “This bill, when enacted into law this year…”. 932 page PDF : Carbon Capture and Sequestration… Electric vehicle infrastructure… Building retrofit program… PART A—GLOBAL WARMING POLLUTION REDUCTION GOALS AND TARGETS… Greenhouse gas registry… International offset credits… Requirements for international deforestation reduction program… Climate change rebates… CLIMATE CHANGE WORKER ADJUSTMENT ASSISTANCE… INTERNATIONAL CLIMATE CHANGE ADAPTATION PROGRAM… I have posted this elsewere but it is relevant. As the world is constantly looking for more potable water, power, hydrogen and fuel, I wonder if some of our young engineering geniuses could consider the following. Visualise a High Temperature Pebble Bed Reactor next to a conventional Coal Fired Power Station adjacent to a water cooling source [sea or river]. The HTPBR can very efficiently electrolyse water to hydrogen, supply heat for a distillation desalinisation plant, or power for a reverse osmosis desalinisation plant and also supply power to the grid. The CFPS is operated conventionally to supply power to the grid but the carbon dioxide is scrubbed and retained. A recombiner is set up and the hydrogen and carbon dioxide are recombined over nickel to produce methane which is then compressed and used as a motor fuel. The hydrogen obtained by electrolyse is very pure and also suitable for fuel cell use in vehicles. French NP. In the existing French nuclear fleet, the number of safety-relevant events has increased steadily from 7.1 per reactor per year in 2000 to 10.8 in 2007, …. Wiki In July 2008, 18,000 litres (4,755 Gallons) of Uranium solution containing natural uranium were accidentally released from Tricastin Nuclear Power Center..[7] French authorities have banned the use of water from the Gaffière and Lauzon for drinking and watering of crops. Swimming, water sports and fishing were also banned. This incident has been classified as Level 1 on the International Nuclear Event Scale.[8] Again in July 2008, approximately 100 employees of were exposed to radioactive particles that escaped from a pipe in a reactor that had been shut down.[9] Dennis George (16:03:30) : “Admittedly the carbon dioxide eventually will end up in the atmosphere but it may be easier to capture it at the tail pipe rather than the chimney.” Isn’t a static chimney an easier proposition than a mobile tailpipe? Thanks Ralph Ellis, I think you got it right 5 years ago. Except in very special niche circumstances all these alternative energy sources are toys. Nice for a few people to fiddle with as a hobby. Our technological civilization has lost confidence in itself and is committing suicide slowly. We seem to be doomed to a future of energy poverty while there is abundant energy in the nucleus which we actually know how to extract. Thanks also to Roger Sowell for demonstrating to everybody again that lawyers are obtuse shaders of the truth who are prepared to argue that black is white and have the sheer gall to think that you believe them. . Nuclear plant of the PWR type are automatically load following because increased demand at the heat exchanger causes the temperature of the outflow to fall so cooler recirculated water enters the reactor. The water acts as a moderator and since cooler water is more dense the reaction rate increases to compensate without any other adjustment. Although this quite desirable it can also lead to difficulties particularly at low loads such as ‘cold slug’: which is exactly what happened at Chernobyl. When the degree of moderation changes the response is not linear, it tends to surge upwards and then drop back and if the surge is too great the core goes supercritical. We usually model this response using Fourier transforms. This is why it is dangerous to allow a nuclear reactor’s reaction rate to increase too quickly, for example it is standard practice for the control rods to be wound out by electric motors and a worm gear to retract the rods very slowly, a lesson learned from the SL 1 Disaster. The rods are held by a magnetic linkage so if the reactor overheats the temperature exeeds the Curie point and the magnetic adhesion fails causing the rods to drop in by gravity. Likewise if the reator is ‘scrammed’ the magnetic field which is generated by electromagnets, is turned off so the rods drop back in immediately, which then leaves the problem of cooling the core without using excessively cold water. Essentially it is this system which failed at Three Mile Island due to incorrect installationof the pumps and the plumbing. There was a partial core meltdown but NO significant release of radiation. PWR designs have hardly changed in nearly fifty years and although well proven are not ideal, they produce small amounts of plutonium etc and do require complex safety and control systems. There are other better possibilities, one is steam moderation of the reaction, because unlike a gas cooled reactor using CO2 or Helium the steam acts a moderator and controlling its temperature and pressure not only allows output to be fine tuned but also allows it to burn a wide range of fuel. Such reactors hold out the prospect of making much better use of the fissile feedstock by burning up almost all of it so there is no significant disposal problem. And they are intrinsically safe, to shut down reaction and cool the ore it is only necessary to reduce the steam pressure and if needed bleed in inert gas to the steam circuit. This sort of technique possibly combined with a pebble bed system offers the prospet of relatively cheap, safe small scale reactors with no significant radioactive waste. The technology exists now which is not to say that it will not take ten to twenty years to go over to it on any significant scale: such plant has not been tested in commercial use. And only governments can order such plant. So don’t hold your breath. Which is why I personally do not think we should consider using PWRs in the UK on a large scale just yet, lets see what the new designs have to offer. Coal and gas can fill the gap perfectly well for the moment: not least because gas combined cycle plant is very cheap and quick to build if wanted. And as I saids in another post we already have the LNG terminals and there is plenty of LNG tanker capacity laid up. It is just a matter of agreeing long term contracts. And damn the CO2 Kindest Regards In 2007 from UK gov statisics: 5.2Mtonnes of oil saved Not much but a start. This is ACTUAL oil equivalent saved based on ACTUAL energy produced byy renewables. Source of data here Dennis George, Why don’t we just let the CO2 escape the tailpipe and provide fertilizer for the soybeans, and cotton. That way our people will be Warm, Well Fed, AND Well-clothed. When did it happen that alternative-energy calculations eschewed simple sums, because arithmetic doesn’t take into account moral, humanistic decisions? (A) .. Anything is better than we’ve got – ‘cos we is bad Is it really true that wish-science now trumps reality for the same reason? (A) .. Yup wish-science is about the power of the mind to imagine- much easier than to wade thro’ arcane and tricky science Have Climate-Censors really redefined arithmetical-axioms as meaningful only in carbon-contextual projections when underpinned by environmentally-‘sound’ principles? (A) .. Ask the (Wax)Mann Steve has got it spot on. There’s loads of energy – and accessible at that – but the illiterate witch-hunters keep chanting the ‘peer-reviewed’, ‘us is bad’ mantras!!! I would like to add another source of renewable energy is Methane gas. It can be harnessed anywhere from sewage system from your home, community, garbage site, or even hog farms. Check out an article on this. bill (16:38:03) :5.2Mtonnes of oil saved Yeah right. Does that include the oil burned in manufacture, transport and maintenance of the “renewable” generators? I’ll bet not and anyway is anyone still burning much oil in electricity generation in the UK? If not it doesn’t make much sense to quote oil equivalents saved. In any case I’d include hydro in traditional energy sources like coal, oil and nuclear. It works and produces energy relatively cheaply. The rest are toys. JamesG (16:28:58) : . ———————————– Nice knee-jerk dismissal, are you perhaps going to perhaps back that up with some concrete evidence that central energy planning has worked so far? What is stupid is pinning the production capacity and competiveness of Western Countries in the Global Community on a set of woefully inadequate technologies, none that are new, have ever been successful and without a radical reshaping of energy storage will ever be viable for anything other than photos for green media consumption. What is stupid is pouring billions of dollars into an industry to subsidize energy production, basically making an industry that would not exist except by the graces of the taxpayers, and one that does not contribute in any meaningful way to overall energy needs, a reliance that will only serve to burden the economy further as installed capacity grows. What is stupid is abandoning some very simply economic “common sense” truths in regards to markets and market forces, if you build a better product at a price that people will pay you will win. If you build a crappy product at a price people are not willing to pay you will lose. If you can argue that I would really like to hear it. The air car—they envision perpetual motion—so let’s pack it all up and go home folks!! Job done! No more need to drill, drill, drill in Alaska. LOL REPLY: Actually, your criticism is misplaced. This concept works pretty well. The only problem is air recharging stations, the PSI needed can’t be gotten froma tire pump at a gas station. – Anthony I’m pretty sure renewable energy is a good idea, but don’t take it from me, take it from this eight year old… this essay just shows how loony many of the readers here are. The arguments were very primitive and outdated. Even though you’re wrong, I could make a better argument against renewables if I felt like it. REPLY: then do it or shut the hell up, I don’t appreciate you calling WUWT readers “loony”. You claim you can do a better job of making the argument while insulting all the readers of this blog, so put your money where your mouth is. I’ll even give you a guest post status. I’m going to board a plane, so I won’t be able to be responding for at least 24 hours. Moderators, please make certain any replies are within blog policy. – Anthony E.M. Smith, Thank you for pointing out the “peak oil” fear mongering that the original post implied. Like this gem, .” I mean, come on, peak oil is so tired. Other than the obvious peak oil hype contained in the original post, I agree with the general sentiment of the article. Renewables are not a panacea. They aren’t free. And they may or may not be the immediate solution. But to completely dismiss them is simply demonstrating ignorance to the supporting role they can and will play in energy production in the next few years. Roger Sowell posted a link to the California Energy information website. Lots of interesting reading about how much electricity costs in CA. For example, here are some levelized costs of electricity produced by various merchant sources in 2007 in CA- Gas- 10 cents/kWhr Nuclear- 12 cents/kWhr Biomass- 5.5 – 13.5 cents/kWhr Geothermal- 5.5 cents/kWhr Hydro- 5.2 cents/kWhr Wave- 103 cents/kWhr Solar- concentrating PV- 42 cents/kWhr Solar thermal- parabolic trough- 27.7 cents/kWhr Solar PV, single axis steer- 70.5 cents/kWhr Solar- stirling dish concentrator- 52 cents/kWhr Wind (Class 5)- 8.4 cents/kWhr Summary- Solar (PV, thermal, stirling) is still very expensive, even in sunny CA. Geothermal looks pretty competitive in CA. Class 5 wind looks pretty competitive in CA, but Class 5 sites onshore in CA are pretty rare, according to: Don’t forget to add the cost of spinning reserves or storage to the wind cost. Coal is not even listed. Surprisingly, nuclear is cost competitive in CA. Who would have believed that! REPLY: Actually, your criticism is misplaced. This concept works pretty well. The only problem is air recharging stations, the PSI needed can’t be gotten froma tire pump at a gas station. – Anthony I was aiming at the perpetual motion notion–Gene No, it’s not law yet. It just passed a Senate committee vote. Then the full Senate has to vote on it. Also, the House of Representatives has to consider and vote on a similar bill, which presumably they will, as Madam Pelosi is keen on preventing ‘climate change’. Once the House and Senate have passed similar legislation, the bills have to go to a Conference Committee for reconciliation. After that, they go back to their respective bodies for another vote in each chamber. Then the final bill has to go to the President for signature. Only after all this does it become law. So there is still time to write your Senators and Representatives, and urge them to vote against this abomination. /Mr Lynn Clarification, and comments about the posting of A. Jones The accidents at both SL1, and Chernobyl were due to the fact that both reactors went supercritical or prompt critical(supercritcal or prompt critical is a matter of debate). When a reactor goes supercritical, or prompt critical the nuclear fission is not controlled, or stopped and the power level increases by orders of magnitude in a very short period of time (of the order of a second), until the reactor comes apart(SL1), or the active safety systems(neutron poison) are destroyed. This was not the case at the accident at TMI which was due to the loss of coolant to the reactor core. At TMI(a PWR) the reactor was tripped due to the loss of a heat sink. Tripping means that the control rods drop into the reactor core and or more boron is added to the coolant. The nuclear reaction(fissioning) is stopped. The core must still be cooled due to the fact of the decay heat from the fission products. What happened at TMI is that the PORV(Pilot or Pressure Operated Relief Valve) on top of the pressurizer(contols the pressure of the primary system) stuck in the open position. The opening of the pressurizer is automatic, i.e. when the pressure of the primary coolant increases above safety limits, the valve opens, and when the pressure returns to normal, the valve closes. In this case the valve did not close. There was only an indirect way for the operaters to know that the pressurizer valve was closed. This indirect way was flawed, and the reactor operators believed the system was going solid (during normal operation, the pressurized is half full of water, and half full of steam). A solid system could cause a large break in the primary system. As a consequence, the operators turned off the Emergency Core Cooling System, and after about an hour and a half into the accident, the core was slowly becoming uncovered. If the reactor operators, had left the system alone after the initial trip, there would not have been a TMI accident. SL1 was an Army reactor which was to be used for combined heat and power for polar regions. It was rated at about 3 megawatts. I believe it was graphite moderated. It had only one manually operated control rod, which controlled the nuclear reaction. According to the accident reconstruction scenarios, this single control rod was jerked out of the reactor for any number of reasons (suicide, control rod being stuck, and then unstuck, a other operators “goosing” the control rod technition, etc). In any event the reactor went supercritical, and it came apart. Two of the technicians were killed within the first half hour of the accident (one was impaled on the ceiling of the reactor vessel by the control rod), and the third lived for about 5 hours. Chernobyl was mostly a graphite moderated reactor, with very limited moderation from coolant water. Western based reactors are water moderated either light water, or heavy water (Candu). The reason for water moderation is that they are intrinsicaly(passively) safe with regards to nuclear fission, i.e. if the reactor overheats the nuclear reaction(fissioning) stops. But they are not intrinsically(passively) safe because of the heat generated from fission products within the reactor rods. Chernobyl had to be actively managed to prevent a runaway fissioning process. Apparently, at very low power levels, during the testing program, the reactivity went sufficiently positive, such that the reactor period was shortentened dramatically, and the reactor went super critical or prompt critical. This type of accident could not happen with western water cooled reactors. — You are arbitrarily assuming that there will a market for: hydrogen for fuel cells (very, very hard to store, expensive to compress, very expensive piping and tanks – and high energy demand to cryogenically store or high pressure store, can’t pump or send cross-country.) Reused Carbon/stored CO2 … Why? Both are a waste of chemical energy, steel, eqpt, pumps and piping and power for no value added. Vent the CO2 as fertilizer. Methane? Not needed now for cars. Don’t spend money and energy and material and manpower reating something not needed. For natural gas for power plants? More is available far cheaper from natural sources. Desalization is NOT needed in most areas (95-99%) of the world. Don’t do it if it is not needed. Spend the money on sewage plants, pumps, tanks and fresh water piping for the dirty – BUT FRESH – water already available. Generate power the most economical ways possible in each different location. Unlke the writer above who cursed free enterprise: Common sense market forces – not corrupted by governments and socialists – will create the profitable and correct solution. Corrupt governments, on the other hand, will only create more government. Oops! I got it backwards. This is an House bill that passed (‘H.R.’), so the Senate has to pass theirs, then Reconciliation, etc. Of course my Representative in Congress, the Hon. Ed Malarky, is working cheek by jowl with the execrable Sen. Waxman, so the two bills will be essentially the same. Still, there is time to make your voices heard. Use paper and an envelope, not just e-mail; it makes a larger impression. /Mr Lynn John Egan (06:26:05) :. Prices spike in the short term for a variety of non-resource reasons. Mostly it has to do with the fact that mine expansion is a ‘several year’ process and markets move in “this month” movements. Prices can NOT be used to judge resources extent. Ever. For anything. Sea water extraction is very real. Proven. It works. It costs a tiny bit more than land sources, so it is not done in quantity; yet. That does not say anything about the technical ability to do it, nor about the rate at which that extraction can be expanded. See: Where you will find: ” The total amount of uranium recovered by the experimental marine-equipment was 1kg in terms of yellow cake during a total submersion time of 240 days in the ocean. ” […] ” The total mass of the nonwoven fabric used in the experimental marine-equipment was 350 kg in the dry state and the size of equipment except for anchors was 8 x 8 x 30 m (length x width x height). ” I’d say that extracting a kq of yellowcake from a 350 kg blanket per year is sufficient proof that it can be done. We can clearly make plastics in sufficient quantity to extract several thousand tons of yellowcake equivalent, should we ever wish to do it. (The plastic being recyclable…) Recoverable U is about 1000 times the total known land quantity, which is itself far more than we could need in less than thousands of years. That we can mine yellowcake more cheaply in Australia DOES NOT make the Japanese sea water extraction unavailable! (Just not cheaper than dirt.) See: for a pretty good exposition on total U we can recover fairly cheaply. Notice that fairly often the comment is made that some source or other was shut down for economic reasons. That is what happens when price drops to $40 / lb. Then price spikes to $100+ and we “discover more”. Then price drops from the glut and “reserves fall”…. NONE of this has to do with quantity we can recover and use at “reasonable” costs (ultimate resource). That stays a gigantic number. ALL of it has to do with what is the lowest cost provider RIGHT NOW (economic reserves). Basically, it’s a description of market dynamics; NOT a description of usable resources. There is no energy shortage and there never will be. There is only a shortage of imagination and the will to use it. Anthony, I have to agree with “Just Want Truth…” though that the info-mercial that he linked to about the AirCar is way over the top and poor scientifically, particularly when it proposes the perpetual motion idea at the end…That’s just nuts! As you say, the idea itself, like the idea of using hydrogen, is not a bad one…and it could potentially give us much more flexibility in how we generate the energy to power our vehicles by serving as a convenient energy carrier, but to suggest it is some sort of free source of energy and that we could somehow generate the compressed air using the car itself is over-the-top and in violation of the laws of thermodynamics. If their air turbine could work on perpetual motion then all energy problems are solved. “…A car that runs on air and constantly refuels itself….. A no cost fill up ever. Not one iota of pollutants ever.” This turbine that turns car wheels could also turn a turbine to produce electricity—couldn’t it? Then that would be free electricity. So we could all pack up and head home. ;) I should have specified this is what my jest was aimed at. Unspecified jesting. My bad. GK says: There are two big fallacies in this. The first is a fallacy of omission: Yes, water vapor is produced when you burn H2 but it is also produced when you burn fossil fuels. The second is the fallacy of equating emissions to levels of the greenhouse gas in the atmosphere. The problem with CO2 is that it is long-lived in the atmosphere so that human emissions can significantly change its concentration in the atmosphere. By contrast, water vapor is not and in fact its concentration in the atmosphere is essentially controlled by the temperature. So, such emissions, at least on anything like the current scales of energy use, would not significantly affect the concentration of water vapor in the atmosphere. In fact, the way we can alter the level of water vapor in the atmosphere is by causing the temperature to change through changing the CO2 levels in the atmosphere, which does indeed increase the level of water vapor and causes additional warming (i.e., a positive feedback). Mr Lynn (19:09:25) : If it isn’t law yet then they shouldn’t have said : “This bill, when enacted into law this year… ” They should have said “if” not “when”, shouldn’t they? This is a quote. copy and paste, from the government web site, the link above : “This bill, when enacted into law this year, will break our dependence on foreign oil, make our nation the world leader in clean energy jobs and technology, and cut global warming pollution.” joletaxi (08:51:04) : I guess You will apologize iff I submit this in French, but I’m not enough fluent in English, in a matter that is not easy even in French. Si je suis d’accord sur les objections soulevées dans l’article, My rough summary of the original would be: “As I am in accord with the objections shown in the article”… And then there is a discussion of the problems of maintaining phase sync and power factors with so many distributed generators and even with photovoltaic inverters (that may not even have sin wave output). This can cause a variety of problems, even causing the grid to shut down as breakers trip. As load goes on /off wind turbines, it is even harder to keep them synchronized for frequency and with proper current / voltage phase relationships. There is a discussion of issues that Germany has seen and their dislike of the problem. It’s been a few years since I tackled a technical discussion in French, but I think I got the basic idea right. This will be my summary of the issues raised as I understand it. And it is true that with so many spinning generators keeping them all in frequency sync and phase lock and power factor balanced is one heck of a problem… You can end up with current being high right when voltage goes low and with nodal points where two currents from different sources (a bit out of phase in either frequency or in current vs voltage phase) can cause wires to blow, transformers to arc, and breakers to trip. BTW, keeping the reactance balanced so that voltage and current stay in phase is why we have those grey capacitor cans mounted on the power polls. Too much inductance, and you get voltage leading while current lags (inductive reactance) too much capacitance and you get a current surge while the voltage takes a while to come up. Resistance, like incandescent light bulbs, is neutral. Historically we had more motors than electronics and needed to add capacitance to the grid. With more electronics and the move to electronic ballast fluorescent bulbs, there is less of a need for capacitance. Keeping the grid balanced for this is much harder when you have a LOT of large rotating inductors (wind generators) popping on and off the system… add in a bunch of capacitance heavy inverters for solar and you jump back and forth from too much inductance to too much capacitance and that can cause lots of things to break… Joletaxi: Please forgive me if I did not get your point across. To the person from Denmark who commented that Dash For Wind stated that there were 54 days in Denmark where wind output was 5 and 10 and 15 and 20% of demand There were 3 days that did not meet 2% of demand. What is significant here were the 86 days or a little less than 1 in 4 days that did not meet 10% of demand. Of course this did not account for season differences because I just averaged out the demand. But there is not too much air conditioning in Denmark, right? But here in the US, the demand could be 50% higher in the summer than other seasons. I would think that the 3 days would easily turn into 20 if I could compare wind output to real daily demand, in the summer. And this does not account for hourly demand/wind output differences, which are greater. So it has to be ABSOLUTELY understood that we need 100% backup of wind, 100%. That is easy to do now while the traditional plants are in place. But I keep reading articles where traditional plants are being postponed on the back burner because of new wind plants. Wind can serve as a very expensive supplement to our grid, but never as an Alternative. It is Renewable only as far as the tax breaks, production tax credits, renewable obligations and whatever else they call them are in place. They would never be renewed without that, and they will never be competitive, because they will always be a duplication of traditional. Joel Shore (19:38:49) : “In fact, the way we can alter the level of water vapor in the atmosphere is by causing the temperature to change through changing the CO2 levels in the atmosphere, which does indeed increase the level of water vapor and causes additional warming (i.e., a positive feedback).” Wasn’t there a post here about measured water vapor at various levels in the atmosphere a few weeks ago? Didn’t it show some increase at low levels and a decrease at higher levels in the troposphere and essentially no overall trend? Why do you keep recycling garbage that isn’t borne out by real world measurements, Joel ? You are referring to assumptions made in modeling. I found this an interesting statement, given the context of this article. I’ve bolded the part that gave me pause. The “LFTR” they talk about is also know as the MSR. Liquid Fluoride Thorium Reactor and Molten Salt Reactor. From: “Thorium is a very abundant mineral in the earth’s crust. The LFTR has a liquid fluoride salt core instead of the usual solid core. The liquid-salt type of reactor was developed by Oak Ridge National Laboratory between 1950 and 1976. The LFTR would use thorium-232 rather than uranium as a basis of its fuel cycle.”[…] “The LFTR is 200 to 300 times more fuel efficient than standard reactors. Given the abundance of Thorium and the efficiency of the LFTR, the combination offers abundant energy as long as people will want a massive energy source.”[…]” …” the LFTRs will be between $1 and $2 per watt of generating capacity. The LFTR will be cheap enough to produce mid-load and peak power, And unlike the conventional reactors the LFTR can do dynamic load balancing for the grid. Why heck, the LFTR can even provide electrical backup for solar and wind, but why anyone would be so crazy as to install solar and wind generating facilities if they had LFTRs is beyond comprehension.” Everyone seems to like grinding their own axe… Me? I don’t care if its nuclear or wind, I just want reliable electricity at a modest (and preferably dropping) price. Oh well…? Just Want Truth… (19:38:50) : The quote is from Rep. Waxman. Looks like he’s counting his chickens before they hatch. David L. Hagen (11:55:21) : However, show me how you run your vehicle on TAR or COAL? Ok. See: The “more >>> Processes” tab on the right is interesting too… They’ve been doing this since sometime in the ’70s or so IIRC… South Africa runs on the stuff. It makes the exotic fuels of Gasoline and Diesel … The fuel needs to be extracted, then upgraded, then converted into syncrude – and then refined. Adding capacity costs $100,000 /bbl/day. To replace 100 million bbl/day will “only” cost $10 trillion. Um, not always… or even often… For tar sands, yeah, you extract it and crack it into a heavy crude so it can then go through a facility like VLO Valero has for heavy crude. No ‘syncrude’ step though. For coal there are ‘syncrude’ folks, but they don’t have an “extract, upgrade” step. I think you are mixing syncrude with tar sands processes… and leaving out FT based on synthesis gas. Oh, and the Sasol FT system starts with just “burn the coal with little air and added water” to make “synthesis gas” or a mix of CO and H2. Again, no extract, upgrade, syncrude, etc. steps. It is quite possible to make lots of tar sands, oil shale, coal syncrude, coal synthesis gas, etc. facilities in less than 5 years and for less than the trillions you speak of. It is being done today on a massive scale… in China. Of course, if you want to do it in the U.S.A. or Europe… well, then you are hosed with regulation, union labor, legal costs, and price hikes, until you give up. But that’s OK. SSL Sasol and SYMX Synthesis Energy have projects in China. We can just buy our gasoline and Diesel from China. No Problem… Oh, and Saudi is expanding capacity in refining oil, so we don’t really need any refineries. AND a giant new facility just opened in India aimed at the export market. See, no capacity or lead time issues at all. We can get all the gasoline and Diesel we ever want from India, Russia, Saudi Arabia, China, Venzuela; all our good friends. I’m sure they will give it to use for dollar bills and treasury bills. We are good and printing and we can / are printing Trillions of them. Making pretty pieces of paper is what we learn to do in our school arts programs and it’s clean and we can recycle them too… So what’s the problem? /sarcoff> .” Actually, Mr. Borgelt, you are correct that some lawyers argue that black is white, and some actually outright lie — it is their job. One example is a criminal defense attorney who knows his client is guilty but the client insists on pleading not guilty. Although some of my very good friend are criminal defense attorneys, my law practice is not in that category. If I ever lie then it very badly affects my business. Reputation is really all an attorney has to sell; so I guard mine very carefully. However, I am unintentionally wrong at times, as we all are. So, I invite you to point out with particularity where you believe I am wrong. I am always willing to improve my knowledge. I ask that you please bring verifiable facts to the discussion, with the source for confirmation. A mere opinion based on hearsay is not sufficient. This article was written in 2004, yet posted in 2009. What crap! Nice cut n’ paste routine you’ve got for pumping up your numbers, pal, and by the way, “In 2005, Denmark had installed wind capacity of 3,129 MW, which produced 23,810 TJ of energy. Wind power provided 18.2% of the total gross electricity production.[1] In 2006, the installed capacity increased to 3,136 MW.[10]” (WIKI) Also, this argument about wind’s intermittence being unreliable, what are you saying, wind might “go away”? Alternative energy is in addition to current oil/coal systems, of course there is a learning curve so to speak, in the development of these systems. Your title made me click: “Our Downfall”? That’s like posting, “Organic Food: The Death of Us All?” Get some vision, and stop pimping for Chevron. @Fluffy Clouds (Tim L) (21:05:43) : ?” Actually, fellows, I don’t practice in nuclear law and never have. Environmental regulations in general stem from the Clean Air Act and Clean Water Act, and the hundreds passed since then. Just so you know, have a look at the link below, and scroll down to Statutes and Regulations. These are laws passed by the government, for the regulation of nuclear systems, because in their sober and wise judgment, nuclear power from fission is too hazardous to be left to the discretion of the contractor, as I wrote earlier. Attorneys in this field generally are acting on behalf of various organizations to ensure those laws are complied with and not ignored. Based on actual lawsuits where evidence of non-compliance is produced, the attorneys are doing all of us a huge favor by bringing their lawsuits. The South Texas Nuclear Project is a good example. In regard to solar, it has its own issues, including the insignificant aspects of a requirement that the sun actually be out ;] In addition, the water requirements of solar are huge (cleaning), huge cost, and ineffecient cell technology. Dennis George (16:03:30) : A recombiner is set up and the hydrogen and carbon dioxide are recombined over nickel to produce methane which is then compressed and used as a motor fuel. You can do this, but you could just as easily make synthesis gas (CO+H2) and run it into a FT conversion and get gasoline and Diesel usable in our present cars – no compression needed…. Another alternative would be to use the methane (or methanol, that is somewhat easier to get) and put it through a hot zeolite catalyst (since you have all that heat…) and make gasoline out of it. Again, this avoids the “fleet change” issue… Oh, I should add to the Dennis George comment: Really like the approach, BTW! Something very similar was proposed by VW in the late ’70s in response to the Arab Oil Embargo. They wanted to use process heat from a HTGCR to convert coal to methanol and run that in cars. They had a projected cost of about 75 cents / gallon of gas equivalent IIRC. Call it about $2 / gallon of gasoline equiv. in present money as a first guess. I think they figured about 70% of the power in your fuel tank ends up coming from the nuke. I could look it up… The book was “Methanol and Other Ways Around the Gas Pump”. They used the coal directly without running it though a power plant, but the reality would likely be that the easiest path is a bit of both. A “partial burn” that gives power along with CO (instead of CO2) out, then add a bit more carbon and some water along with a lot of nuclear process heat to get CO + 2H2 that is reacted to CH3OH then you do the zeolite thing… But I digress… Bottom line is that this is a good idea and chemical process engineers do this kind of thing all the time. It isn’t hard, new, or particularly difficult. What’s hard is to get government and pressure groups out of the way to let industries make decisions based on engineering and economics rather than politics and fantasies. Stephen Brown (10:03:33) : The Law of Unintended Consequences, as applied to wind farms! So air safety is being compromised to accommidate this wind power boondoggle. I understand the Whitehouse wants to idle Loran, which is a cost effective backup for GPS. No responsible government would go strictly with satellites. I guess next they’ll ask us to replace the runway lights with compact fluorescents. Dave ATTN Douglas Taylor. It seems you know your business sir. I hope my post did not mislead anyone, I merely wished to make two points, that nuclear fission holds much promise and that we do not need it now: we have perfectly good fossil fuels. And also to correct some rather odd ideas about fission and fusion power that float about here. To further clarify what you and I said. Any PWR tends to be very sensitive to load variation at low powers. The RMBK reactor at Cherbonyl is largely graphite moderated and a plumbers nightmare. Because it also uses water moderation it is horribly unstable at low powers. This was well known to the Russians and they had various safety devices to curb the instability. Why then the operators disconnected these safety devices and ran a test which their manual forbade them to do is a mystery. The rest is history. By contrast as you correctly say the problem at Three Mile Island was that the top of the core came out of water: not least because there was no remotely controlled top vent valve. In truth the operators did what they were trained to do, they did not know that the pumps and plumbing has been wrongly installed, nor had they either sensors to tell them what was wrong or even if they had they had no controls to rectify the problem. It is easy to blame them: but the proof of the pudding is in the eating and even if the reactor was wrecked there was no significant release of radiation. That is exactly what defence in depth means, every system may fail one after another until the last ditch but if that holds all is well. Still the press likes the frisson of saying if the last ditch did not hold think of the catastrophe. But it did hold: and there was no catastrophe: except in the press of course. No the point I was trying to make is simple. It is not that we do not have a practical and safe nuclear fission technology but that there are much better ones on the horizon. And that we do not need to choose now, we have plenty of fossil fuels to be going on with. Kindest Regards. Mike, I think that remark was not appropriate. Please remember that Roger practices law in California. Given the pressures in this state against nuclear his comments about the costs to build a reactor are fairly accurate. He must recognize that reality. From all that I have seen, Roger has an engineers truth first. Lawyering comes second. Yes, he is set against fission, but again, given the context of California, that is appropriate. Can you imagine the costs involved in getting approval to put a reactor on top of an earthquake fault near the coast in California? It would be impossible. BTW, there is nowhere in California that is not near a fault line… Heck, you can’t even buy gas cans in California unless they have a ‘special’ low vapor nozzle. And every single paint, solvent, you name it must be ‘special’ California approved… (My mechanic can no longer get the spray that does clean parts that contains MethylMethylKetone or propan2-one he can only get the stuff that does not work with acetone in it.) So when Roger says it’s just too expensive, realize that it’s darned near too expensive just to rebuild a transmission here due to state laws. Imagine what that would do to a nuclear project… Solvents that don’t clean. Paints that don’t cover or hold up. All of it “special” and all of it needing to be passed through a special nuclear approval process… And don’t even think about trying to make anything out of lead in California. The local shooting range (outdoor) does not let you use lead shot for skeet anymore, since the state is trying to shut them down; and the local indoor range shut down since they could not meet the exhaust air filtration requirements (essentially perfection…). So expect to be making your nuclear facility with as close to zero lead as possible. And the state wonders why anyone painting parts is leaving the state, and anyone doing soldering, and anyone doing casting, and anyone handling petroleum waste products (i.e. drain oil), and anyone working with lead, and anyone involved with any heavy metals, and anyone who needs working solvents and degreasers, and dry cleaners, and even bakers… (Yes, the local bread bakery – major industrial scale – got hit with a pollution notice for their “hydrocarbon pollution” … the ethanol the yeast made in raising the dough baking out of the bread). Yes, the smell of baking bread is officially an Evil Toxic Pollutant !!! Beginning to get the picture? … Make a nuke with no toxic material and no air pollution. Not even from paint drying. And don’t dare think of baking bread… So China can build a nuke fast and effectively, California can not. Roger is in California… it’s not about him, IMHO… David Wells: .” David, you’re a scary, scary man. It’s fascinating to me to think of where this mindset arises from. I’m astounded, to no end, how it thrives, and how much influence it has these days. I request of you (surely to be denied) that your diatribe be saved. Print it, tuck it away in your archives, and go back and read it 20 years hence. You may astound yourself with just how far down, and how warped you were back in the day. Good day. k Antenna Wilde (21:50:38) : Antenna Wilde, You were supposed to say Exxon not Chevron. Exxon has always been the petroleum company name of choice for attack comments. I’ve never heard of ChevronSecrets, etc. I visited your blog. You are from San Francisco? I see your blogs is, for the most part, about left wing politics. And it is noticeably vulgar. But one thing I didn’t see in it—science. Antenna Wilde (21:50:38) : Antenna Wilde, Why do those on your side of this issue always go to Wikipedia, grab a factoid, and then come come back to the debate and post the factoid like it is Gospel and will end all argument? Who really trusts Wiki that much? Global warming entries in Wikipedia have been altered by Green radicals, the leader being William Connolley. Wiki is not an encyclopedia. It’s global warming entries are all dubious—having gone trough literally 10,000’s of edit by people who are not unbiased. To use Wiki as a source gives an indication of where you are coming from. Ref : & Fluffy Clouds (Tim L) (21:05:43) : This is correct! sowell IS THE REASON for the problem with litigation !!!! No, he is not. Our legislative idiots make the laws. The lawyers are just as often simply trying to keep you from being ground up by them. I won’t go into what led me to appreciate the ‘good lawyers’… Let’s just say that the proper quote from the Bard is more of “First kill all of their lawyers”… i.e. disarm your opponent by keep your hired guns available! So our legislature makes idiotic laws. Roger tries to keep his clients from the meat grinder. That does not mean he made the grinder! Now, since most of the political morons who made this mess have law degrees, you would be correct to say that generically “lawyers” made the litigation problem; but it is wrong to personalize it to Roger. I’ve read his site. He like to design things and he wants to have things built… That is not a litigation happy obstructionist. It’s a frustrated engineer getting a law degree to keep from getting back a stump when he encounters The Legal Machine… From Wikipedia, Sellafield’s biggest decommissioning challenges relate to the leftovers of the early nuclear research and nuclear weapons programmes. Most of the rest of the decommissioning costs are from the world’s first commercial nuclear power station at the site. It’s deceptive to cite Sellafield’s decommisiong costs as an example of what it costs to decommision nuclear power stations. The costs of more recent designs are much lower and of current designs lower still. And as for disposing of nuclear waste. It can be buried in deep disused mines and sealed away, effectively for ever. The only real issue is the Green fueled NIMBY objections, which are off the irrationality scale. Otherwise good article and many interesting comments. Roger Sowell: Well this couple of drive-by shootings without any backup make the point for a start: “Based on actual lawsuits where evidence of non-compliance is produced, the attorneys are doing all of us a huge favor by bringing their lawsuits. The South Texas Nuclear Project is a good example.” .” Followed by this: .” Evidently you are work for the rent seekers who aim to rip us all off with their tax subsidized renewable energy scams. Should be a nice little earner for you. If you were any sort of engineer you would realize that engineering solutions are about economics not mere practicality. I think it is a Mark Twain quote “an engineer is someone who can do for 50 cents what any damn fool can do for a dollar”. Sure a lot of money has been spent on nuclear research. Much of it on weapons and production thereof. These are now sunk costs. Future nuclear power research can be funded from profits made from generating and selling power just as all sorts of energy companies fund research. Nobody objects to sensible engineering reviews of nuclear power plants. What isn’t needed is a regulatory process which allows every bunch of lunatic Luddites and their lawyer hangers-on on the planet to delay projects and increase their costs until they become uneconomic and fail. Wind farms ‘could pose danger to planes without new air traffic control radar system’ The development of new, larger wind farms could pose a danger to planes unless radar systems can be adapted to deal safely with them, air traffic controllers have warned. Wind farms can create distortion on radar screens and as the number of farms has increased, so has the number of radar “blackout zones”, meaning that aircraft passing through the area can effectively disappear with air traffic controllers losing their exact position This is the moment when we must come together to save this planet. Let us resolve that we will not leave our children a world where the oceans rise and famine spreads and terrible storms devastate our lands. – Barack Obama Where did mr. Dernoga go? I want to hear his earthshaking arguments. tl;dr! “When” in this context is a construction we might term the ‘optimistic conditional’. For a bill in the US Congress to become a law, it must be passed by both the Senate and the House of Representatives, and then signed by the President. This awful bill can still be stopped, if enough Senators and Representatives refuse to vote for it. There are some Democrats from fossil-fuel-producing states who are not wildly enthusiastic about cap-and-trade. So there is yet hope. A flood of objections from constituents can make a huge difference. /Mr Lynn Those discussing French nuclear energy might also take into account the fact that the UK now imports 14% of its electricity from France >>There is no comparison between the dangers of nuclear >>proliferation and natural gas explosions. Coal fired plants >>work just fine, particularly since they were cleaned up in >>the 1960s.?? DaveF (05:14:57) : GWh 2003 2004 2005 2006 2007 Total supply (as given in Tables 5.1 and 5.2) 400,369 401,357 406,633r 406,341r 401,671 less imports of electricity -5,119 -9,784 -11,160 -10,282 -8,613 plus exports of electricity +2,959 +2,294 +2,839 +2,765 +3,398 less electricity used in pumped storage -3,546 -3,497 -3,707 -4,918 -5,071 less electricity used on works -18,136 -17,030 -17,871r -19,210r -18,087 equals Electricity supplied (net) 376,527 373,340 376,734r 374,693r 373,298 Which if my maths is ok (8,613 – 3,398) / 401,671 = 1.3% imported Also renewable data:? perhaps some legislation that increases the cost of products is worthwhile? Some points from my side. The first problem you describe (big blackouts) is a grid problem not a gernation problem. If the US had an up to date grid the blackout wouldn’t be that big. But due to greed and profit maximization, nobody wants to invest in a system as long as it makes money. Beside that, thats why I’m against the idea of releasing critical systems to the market. You can eleminate the problem of the big grid, by decentralized generation. Instead of 100 big plants across the US, you can set up 10000 smaler one, that are well distributed. If a part of the grid fails, the rest remains untouched. A other point you mentioned are subsidies. States have put in much more subsidies in nuklear power than they have done in green energy. In Europe, especialy in Germany where I come from, subsidies for coal are also a big problem. Subsidies are not a green energy thing. Safety of nuklear plants is another point. Beside the all known Tschernobyl, there is another thing. First see the list of critical situation in NPPs. You will find them (with sources) at some of your not liked green sites. Second, there is some evidence that the rate of child cancer in the near of NPP is significant higher than normal. Would you like to life next a NPP? Than you say, the raw material for NPPs is there for thousends of years. I heared some other voices, that say that this calculation is based on the current number of plants and even than it would be very optimistic. I heared that the reserve only holds for 80-100 years. Wouldn’t be that sustainable. Last one – Incontinuos power production: 1. Weather reports are able to predict windspead days before. We are not talking about a single turbine, but a network of turbines. If the wind is overall to low, there is enough time to power up one of the remaining big plants. BUT to du so, those plants must be agile enough. NPP are not agile. Gas plants are. So Gas plants are the perfect supplement to renewable plants. 2. If the demand is there, solutions for storing energy will be found. As long as nobody needs to store big amounts of power, nobody will come up with a solution (for a not existing problem). If electric cars will realy come, they are maybe a solution for storing a lot of energy. Next to renewable energy (solar energy+heat, wind, thermal (geo and air), tide, bio gas etc.) there is the possibility to raise efficency. I.e. use process heat of big plants or factorys to heat or cool. Sure, nuklear power is cheap. But if the power company had to pay a real price for the waste and to garanty for any accident with there own money (and not with the money of the state and so the people), the price would be higher. Steven Goddard Your ability to avoid the obvious is truly remarkable. What you have not [snip] is that irrespective of nuclear generation, nuclear weapons will exist anyway. On a technical basis your argument is simply wrong. Clearly you have no understanding of the technology required to extract plutonium in fuel reprocessing or you would not be blithely assuming that terrorists will have access to it. >>The air car — they envision perpetual motion… It is not perpetual motion and it is not free either. I do so hate these Greens when they say energy is free – IT IS NOT FREE. Wind energy is actually quite expensive, due to high infrastructure costs. And this air-car needs a b***** great compressor to compress the air and give it energy. That compressor will run off electricity, which runs off coal, which pollutes the local countryside, and also costs a lot of money. And the more times you change energy type and store it, the less efficient the use of that energy and the more expensive the vehicle. That is why it is still cheaper (and better for the environment) to run a diesel vehicle than an electric vehicle (and much cheaper again than a hydrogen vehicle). >>If you burn H2 in cars, the exhaust output is not CO2 as >>in regular fuels, but H20 – water. But let’s not mention the huge amounts of CO2 and H2SO4 that were produced by the power station that produced the electricity that made the H2 in the first place. Let’s include the whole cycle, shall we (and a remarkably inefficient cycle it is too…). . >>This article was written in 2004, yet posted in 2009. What crap! >>Nice cut n’ paste routine you’ve got for pumping up your >>numbers, pal, This article WAS written in 2004, pal, and I have several colleagues who can verify that. Indeed, most of the UK media editors should be able to as well, as I sent it to enough of them. (But you will have to elucidate on ‘pumping numbers’, pal, we speak English here.) . >>Also, this argument about wind’s intermittence being >>unreliable, what are you saying, wind might “go away”? As it does many times a week, pal. I flew over 2,000 turbines a couple of days ago, and not one was turning. Strange as it may seem, but the wind had ‘gone away’, pal. On costal sites, the wind ‘goes away’ like clockwork, twice a day (check out ‘diurnal effects of wind’, pal). . >>That’s like posting, “Organic Food: The Death of Us All?” >>Get some vision, and stop pimping for Chevron. Sadly, you could be right there. I would guess that if we all indulged in organic farming, some 1.5 billion people would die of starvation. Seriously, organic crop yields are simply not great enough to feed the already overcrowded world we have, let alone the extra population that will arrive in the future. Organic farming is an indulgent, luxury product designed to massage the egos of the rich – who don’t give a damn where the rest of the world will get its food from. And if you read my article properly, you would see that I was promoting nuclear power, not oil. The sooner we stop burning petrochemical feedstocks the better, in my opinion. ralph ellis (06:09:37) :?? Nothing is 100% safe. If nuclear were safe: 1. there would be no accidents. 2. there would be no need for all that expensive scondary containment. 3. there would be no back-up systems. If 1 can fail then so can 2. In siberia there are a few 10,000 people close enough to catch the nuclear fallout. And evacuating them to another location is “simple”. In the UK tthings are a bit more crowded and you would have to evacuate millions ad a large % of the land would be uninhabitable. eg just one Chernobyl sized problem would do this. No-one say it will never happen. But there is 1 chance in 1 million years of it occurring. This does not mean no accidents for 1 million years. It means there might be one tomorrow but then there is unlikely to be one for a million years. ONE accident is UNACCEPTABLE in the UK at any time that carbon based lifeforms inhabit this island. >>Global warming entries in Wikipedia have been altered >>by Green radicals, the leader being William Connolley. >>Wiki is not an encyclopedia. Too true. I tried to add details (and problems) of wind intermittency on Wiki, but they were all systematically deleted. Eventually, the entire ‘wind’ section was ‘sealed’ due to ‘vandalism’. So adding the truth to Wiki entries is classed as vandalism. A really reliable resource, obviously. Roger Sowell (13:09:25) :. Pity you could not append a link to your “greatest cons of all time”. They do live in an electricity powered civilization from all I know, so it matters not if it is subsidized energy if they can afford it . >>ONE (nuclear) accident is UNACCEPTABLE in the UK at any time >>that carbon based lifeforms inhabit this island. If you have that philosophy, you may as well never leave your bed. Vehicles in the UK kill 3,000 people a year. Are they banned? Could we do without vehicles? If we are to survive and prosper as a nation and a species, we need to balance gain and risk, and clearly the benefits of nuclear power outweigh the risks. Besides, what is wind power derived from? Nuclear energy! Now you might think the Sun to be warm and friendly, but if it ever decided to throw out one of those CMEs in our direction, we might all fry. Are we to ban the Sun too? . There’s a nice graphic on Andy Revkins dot earth blog of the total energy research budgets of the member governments of the International Energy Agency. It displays the subsidy argument in its true perspective. Nuclear has received and still receives much more than it’s fair share. Even fossil fuels – ie established technology – still receive more than the puny amounts allocated to solar, wind and geothermal. It’s no wonder there are so many advocates for nuclear energy and so much basic disinformation abounding – there are a lot of fingers in the pie. And those costs do not even account for decommissioning – it’s just research. It is a well known fact that the absurdity of the nuclear authorities in France and the UK being responsible for renewable research too meant they had free rein to overstate the costs of wind, solar and wave energy while burying the costs of nuclear by similar creative accounting. This nuclear funding bias was initially exposed in the UK during the “Salter nodding duck” case and again when the UK reactors were privatised because nobody actually wanted to buy them after seeing the true costs of nuclear. The only reason the French want them now is because the decommissioning and most of the new-build costs are underwritten by the UK government. So the British taxpayers will be subsidising French industry and the French will finally get some return on their nuclear investments. Stop the nuclear greenwashing you lot! Yes, nuclear has it’s place but it still has lot’s of problems and those new designs are still largely untested so any claims made for them are mere speculation. We old hands in the energy business know how nuclear speculation goes – always wildly inaccurate. But it’s not a case of either/or and we needn’t argue that it is. It is quite simply sensible to spend the money on renewable research just as it is sensible to spend money on thorium reactors and on nuclear fusion. But meantime, as far as I can see, natural gas and coal gasification has all the others beaten hands down. Someday we may be able to use wind or solar power to generate hydrogen (or other fuel) economically and thus store energy for when it’s needed. When that someday arrives, that’s when we should start building wind farms and solar generating stations. Until then, we need to keep up the R & D in various energy solutions while building new, conventional power plants to meet our energy needs. When we build new, conventional plants, we can actually decommission older, dirtier, inefficient power plants. Without new, conventional plants, we will have to keep those old, dirty plants in operation. @ Antenna Wilde (21:50:38) : Are you in the employ of T. Boone Pickens? How much do you have invested in GE? Who pays you to post these things? Whenever somebody states or implies that people who don’t agree with their visions of the green future are in the pay of big oil, I know they are not serious people and should be ignored. Mike Borgelt: That was from radiosonde data that has known severe issues with it. By contrast the data from satellites shows that the upper troposphere is moistening as expected. See, for example, here:;310/5749/841 (There are also other studies by Dessler and co-authors.) The satellite data is much more believable because as Fig 2 of the Soden et al paper demonstrates, the assumption that such a feedback is occurring results in much better agreement not only with the overall long-term trends in the data but also with the fluctuations on roughly yearly timescales. Reprints of Dessler’s papers on the water vapor feedback that I referred to are available here:;310/5749/841 with the 2009 Science paper providing a good overview. ralph ellis (07:30:19) : Vehicles in the UK kill 3,000 people a year. Are they banned? Could we do without vehicles? A car crash affects physically only those involved. There is no long lasting physical affect to bystanders and land. A plane crash on London would wipe out more “people” but at least those that survive can pick up their lives in the same location afterwards and would not be subject to genetic diseases. An accident at oldbury npp would kill as many AND effectively kill the land for centuries to come, and leave others to suffer the affects of genetic damage. Now you might think the Sun to be warm and friendly, but if it ever decided to throw out one of those CMEs in our direction, we might all fry. Oh come on! Ridiculous! But then don’t you already protect your eyes and skin against its harmful effects? Not so friendly then. JamesG, Nuclear power is the only energy that the government should have anything to do with because no other form of power generation holds as much potential to destroy. I do not believe that even dyed in the woool libertarians such as myself want every individual on the face of the Earth to get hold of enriched uranium or nuclear waste. Wind, coal, geo-thermal, solar or what-else have you does not have anywhere never the potential to be so destructive as atomic energy. This does not mean that these energy alternatives are better, just that they do not require an atomic physicist to over see their operations. Also you have a huge amount of capital that must go into the construction of an atomic energy plant where most of the costs are up front and construction delays greatly increase the costs due to interest expense. The government should not be spending a dime subsidizing anything except nuclear energy. Also you are a liar about the amount of subsidies that nuclear energy actually recieves because you are including the insurance trust that the nuclear power generators pay for without discounting their payment into the system. Nuclear could be cheaper than coal if the plant design was more standardized and the government would cover interest expenses during the construction phase reducing the need to issue bonds to cover interest expense. This would ultimately benefit everyone concerned because it would provide good high paying jobs, make nuclear power more attractive to Wall Street while reducing cost to ratepayers. Nuclear is not the perfect solution, but it is a great way to provide a base load at the lowest possible cost with the least environmental affect. As I understand it, the most serious problem with nuclear energy is that it is not very good modulating its load to scale with demand. I do not see any alternative energy source being able to produce energy on demand at all, so perhaps the best thing to do would be to over build nuclear and find a use for its extra wattage during off-peak hours or retain coal fired power stations to modulate the load. With all the brilliant engineers that we have it should not be that great of a challenge if we discover the political will to live an abundant life. AnonyMoose (08:01:46) :. Another problem is that as soon as government picks the winners and losers, then real innovation is hampered, if not eliminated altogether. oxdriver (08:47:52) : Congress signed into law that biomass from public lands can not be used to create alternative energies (Biofuel Incentives and the Energy Title of the 2007 Farm Bill). Why? It was against the interests of some group to permit. My guess? Farm lobby. After all, if you are producing alternatives that work, ethanol from corn would not look so attractive. I could be wrong on that. But someone didn’t want you to do what you want to do. It does make a lot of sense, another strike against it. “Bill” – I’m afraid you can’t escape nuclear energy as easily as all that. The French have lots of nuclear power stations and apart from the fact that we in the UK rely upon the French for 14% of our electricity (Sorry if I’m boring anyone by repeating that) but some of those stations are in Northern France ( on the Cherbourg peninsula, for example) as little as sixty miles from our southern coast. No-one is going to persuade the French to abandon nuclear power within the foreseeable future, so surely the best way to prevent accidents is for atomic scientists the world over to work together to promote best practice, which I believe they do, to a large extent, not shut our eyes and hope it will go away. Joel, I think the evidence is that Dessler and his pals simply redefined the term ‘feedback’ to include humidity. The only feedback measure that is valid is temeprature. And the temps are not moving as predicted. jamesG, Are you suggesting that nukes in France are paid for by the UK? At least two commenters have brought up the issue of radioactive waste. As a health physicist, I can address the concerns. Is radioactive waste necessarily more hazardous than other waste, particularly chemical wastes? Not really. Some points to consider… 1. High level radioactive waste, as fuel elements, is insoluble. If you put it in a highly secure, underground disposal facility which is in a low water area, is the waste going anywhere? No. It will sit there indefinitely and pose no risk. Given that it still is an energy source, my guess is that future generations will dig it back up and re-use it. 2. For the sake of argument, let us presume the waste somehow should start to disperse to the environment. Would it disperse as chunkies or would it move by molecular dispersion (one or two atoms at a time)? It would be the latter. By the time the atoms migrate to anything, e.g. a water well, there would only be long lived nuclides and damn few atoms. When it comes to radioactive atoms, dispersion is a solution. Why? Well, what most people don’t realize is that a radioactive atom only emits its radiation once. Is that atom emitting radiation all the time? Absolutely not! It has a 50-50 chance that it will emit its radiation sometime in its half-life period. So if an atom has a 24,000 year half life, it sits around doing nothing (no emission) until it lets loose its radiation. With a long half-life like this, the odds are it will NOT emit its radiation in your lifetime if that atom gets into you. Every day of our lives, we eat potassium40, carbon-14, uranium, radium, thorium, and a host of other naturally occurring radionuclides. A few atoms of reactor waste wouldn’t be a gnat in the ocean of radiation that we normally receive. 3. Compared to other waste streams from energy sources, reactor waste is very small in quantity. It takes very little disposal/storage space to handle all of the waste. 4. Radioactive materials are so much easier to find and deal with than chemical wastes. How do you find the waste chemicals from the manufacturing of solar panels should they get loose in the environment? It’s not as easy. Many of the chemicals are soluble and move easily in the environment. Managing radioactive waste is not rocket science. It is pretty easy to do. The issue of radioactive waste is a red herring. Radioactive waste disposal can be very, very safe both for the environment and mankind. I find it fascinating that so many want to make judgment on radioactive materials (principally reactors), yet less than 10% of the population can correctly answer this most basic, elementary question on how a radioactive atom works: how often does a radioactive atom emit radiation? When one understands the simple concepts that answer this question, understand what half-life means, and then apply this basic knowledge to radioactive waste disposal, suddenly the fear evaporates. Knowledge is power folks. Learn the answer to this question and you will be empowered. anna v, There was a similar discussion on WUWT’s piece “Now That’s A Commencement Speech.” My comment at (15:02:15) covered much of this. bill (04:28:49) : To satisfy the French will require (12.4/1.5)*10^6 *250 /1000 tonnes polymer which is (if I haven’t slipped on the decimal) a rather unbelievable 2.1*10^6 tonnes of polymer. Which according to this randomly selected site: would be about 1% of world polymer production. Not exactly a lot. Since it can be reused many times, the quantity per year after production began would be smaller. What’s so hard to believe about 2 million tons? I think you need a bit of perspective. This machine is used to dig coal: It moves 100,000 cubic yards in a day. At a typical density of dirt (about 120 lbs / cu ft) that would be about 27 x 120 / 2000 or 1.62 tons per cu yard for a total of 162,000 short tons PER DAY. So in two work weeks (take Sunday off) this machine moves about 2 MILLION tons of stuff to mine coal. I’d trade 2 weeks of work for the entire nuclear derived power output of all of France. The polymer will therfore have to be sunk into a deep water current and all 2 million tonnes dragged up from many meters down. Yes, they used the Japan Current in their test. Nature conveniently provides the energy to move all the water. That 2 M tons is about 5000 to 6000 tons per day. Since it sits for most of a year, you don’t need to put it all down on one day and all back up on another. It would be better to have a continuous production. Oh, and since you seem a bit out of touch with the scale of industrial processes: 6,000 tons is just about NOTHING on a mining or industrial scale. Not even a gnat on an elephants anatomy. About 250 tons / hour. It would require running the above mining machine equivalent for about 0.037 of a day each day. Or about 54 minutes. Unless I slipped a decimal… I guess we’re going to need a smaller machine… about 1/24 that size ought to do it… Hmmm! Yes, Hmmm indeed… Just Want Truth… (15:29:10) : No, it just left the committee. It must now be voted on by the entire House. Then, it must pass the Senate. Then, the President signs it into law. H.R. 2454 is not the law of the land yet. It is not so much that radioactive waste is particularly hazardous but that the danger is very small compared to naturally occurring sources. The ecopropagandists love to say that Irish Sea is the most radioactive in the world which may be true: but they never mention that the source of all that radioactivity is natural. Similarly the county of Cornwall, England, not only has problems with arsenic in the soil but lots of radioactivity too, sixty years ago a number of water wells were shut down after this hazard had been discovered. Kindest Regards bill (06:26:34) : ? China, sadly, is not influenced by California law. Our paint has been lead free for decades, it’s the newer ‘only water based’ part that makes it not work well for many purposes. And as near as I can tell, propan2-one has never been implicated in ozone, having no chlorine: So I don’t see how your snipe connects to my comment at all. Perhaps a bit more time in chemistry class would help… perhaps some legislation that increases the cost of products is worthwhile? No need. We already have a 10% sales tax making everything so expensive that folks are forming shopping convoys to Nevada… Personally, I like Oregon more. About $100 gas for the trip, so I hit break even on a $1000 item (unless it’s electronics that have a special green tax…) but it really depends on the ski season. Since I’m up the hill anyway, it’s a great time to pick up all the products that are not available in California. It’s been a while since I did the Oregon run. Last time was $1300 of truck tires for my old Ford F350 4×4 – but then I needed to fill the back with “other goods” to make the fuel cost back… It got about 9 miles to the gallon… Took about half the bed loaded up for 2 neighbors… Isn’t it great what they have done to reduce smog production? /sarcoff> Come to think of it, I can’t name a single product other than agriculture that is still made in California. Even name plate products like Apple iPods are outsourced to other places. No, a ‘product tax’ won’t work since we “don’t DO products” here… I’m sure somebody makes stuff, but it’s not the huge manufacturing center it once was. (I drive past the empty factories frequently). FMC shut down some time ago, they made the Bradley tank-ette. Aerospace has largely bailed. Computers are toast. Some electronic parts remain, but mostly samples, the actual fab having moved to overseas long ago. Ditto much of the communications gear fab. A bit of software remains (though the actual production and pack of media is gone). Heck, even movies are now managed out of hollywood but actual filming is “on location” somewhere else much of the time. (Though the back lots remain and are sometimes used along with the sound stages.) I guess that’s what we still manufacture, fantasies… You will have to put your tax on fantasies. Don’t tell Hansen and Gore … ralph ellis (06:09:37) : A few years ago I saw a bumper sticker saying “More people have died in the back seat of Ted Kennedy’s car than Three Mile Island”. John Galt (10:17:33) : Another problem is that as soon as government picks the winners and losers, then real innovation is hampered, if not eliminated altogether. You are being too gentle. The true state of affairs is that “As soon as the government states an intent to pick the winners and losers”… Just stating that they are going to stir the pot kills innovation and shifts all the money from R&D into PACS. >>There’s a nice graphic total energy research budgets >>Nuclear has received and still receives much more than >> it’s fair share. Yes, but that includes nuclear physics budgets – answering the fundamental principles of physics. CERN is science, not nuclear power. And there are not many fundamental principles that can be discovered by looking at the internals of a windelec (wind turbine). . >>Wind power is nuclear >>Oh come on! Ridiculous! Not ridiculous at all. What you are saying, by the promotion of wind power, is that you are perfectly happy with nuclear power (the Sun) as long as you think that this nuclear power source is sufficiently safe. All we need to do, therefore, is emulate the same safety standards as our nearby nuclear reactor (the Sun). “Yes, nuclear has it’s place but it still has lot’s of problems and those new designs are still largely untested so any claims made for them are mere speculation.” Whereas the the wind turbines have been tested and all their claims have already been proved to be speculation. We now get to watch appalling winters render most turbines uneconomic to repair within 5 years. Roger Sowell wrote: The word power does refer to electricity. As to the renewable figures you stated, you are quite wrong, sir. The figure is 23 percent, please see the link I referenced. Hydro is right at 10 percent, which is less than half of 23. No Roger, I was not wrong. I was showing the percent of Renewables as a percent of Power in California, not just Electricity. If you want to limit discussion to just electricity then don’t use the term POWER. Then you use percent increase of Wind relative to itself to claim that: The wind generation increased more than 33 percent since 2001, Yeah when you don’t produce much to begin with its not hard to get a 33% percent increase over 8 years. The fact is the last year the increase in Wind was but 0.2 % of total electrical generation. Or if you prefer its gone up by 1 % of generation over the last 5 years. Solar is flat and the same percent in 08 as it was in 04. Large Hydro appears to be really down.. Arthur Fascinating read. It’s actually quite similar to the decision a homeowner has to make on a micro scale when deciding whether to tie to the grid of go off the grid. When a battery is full it is full. See for example To Grid-tie or Not to Grid-tie. I also agree that we can’t ignore nuclear energy. It will probably have to be part of the mix. I can’t see the moon going away – so I think we are fairly safe relying on tidal sources as a big chunk. Wind and even solar iffier and more dependent on weather conditions. This was very forward-thinking in 2004 and it reads, to me, like it was written yesterday. Andy Greene Green Living Tips for Rednecks Underlying this essay is a key issue: Ralph has picked the least power dense (wind) and most problematic (hydro / nuclear) energy sources to critique. He hasn’t included PV, solar-thermal, and geothermal which may have significantly more potential. That said, his analysis of the issues with nuclear in particular and well thought out. nuclearinfo.net claims that current known reserves of uranium will last for 85 years at current rates of consumption. Now, take into account that current designs only use about 5% of the energy in the fuel and “deep-burn” designs can get that up to 98% (with corresponding reduction of the radioactive waste issue), and that 85 years goes up to over 1,600 years. Without “additional or speculative reserves” (which nuclearinfo.net says could kick up the duration to 500 years, which translates to 98 centuries with “deep-burn”). Oh yes, that is counting only on the U235 in the ore, combine breeder and deep-burn and the supply goes up by a couple of magnitudes. Then there is thorium, which is three times as abundant as uranium in Earth’s crust. Solar has the following problems: 1. variability – partly highly predictable (day/night, seasonal), part much less so (clouds, dust, smog, anything that attenuates the sunlight). 2. Siting – insolation as well as cloudiness has to be taken into account. England gets less sunlight than anywhere in the contiguous 48 states. 3. diffuse – which brings up the shear area needed for the collectors/concentrators. Take the effective insolation at your site, divide by the efficiency of your system, and figure out how many square kilometers you need. 4. water – the collectors/concentrators heed to be kept clear of dust, etc. and washing off the surface takes water. Possibly a lot of water. Definitely a problem in a desert, where most of the good sites are. Whew, where to start. I have spent my whole career in alternative electricity generation and conservation. I started at Pacific Gas & Electric 29 years ago using ratepayer dollars to help agricultural customers reduce purchases of our product. I was skeptical of the “sustainability” of this program at the time and soon switched over to energy production. I built small cogeneration plants for the next 10 years and then landfill gas generating plants ever since. The article is amazingly durable considering it was written five years ago. Subsidies always distort markets and IMHO, never produce net benefit, just redistribute to the well connected. The only point I would like to correct in the post regards “spinning reserve”. In the US, there are all sorts of reserves and you can’t trust the words. Some spinning reserves qualify if they can produce power within 15 seconds, like emergency generators. So there are not large stations literally spinning at idle. The way most “spinning” reserve is handled is to always have at least a 10% overload rating available so that other, slower to respond plants can be brought on if there is a major outage. Wind, Solar, and especially Tidal, are very predictable, even if they are variable, so they do not cause extra spinning reserve to be operated. Mikkel (03:05:39) : Nice theory, but the idea of people buying their energy directly from generators and treating the utility as simply a delivery company failed miserably. California’s utility bankruptcies were caused by that theory. And nobody was able to get enough more for green power to save it, so the state had to force everyone to pay more for it. People will pay a little bit more for green electricity, but not enough people will pay enough of a premium to expand green generation any more. Storage would make green power even more expensive and is totally unnecessary until there is excess green power during low demand periods, which won’t happen in any place there is significant population. This is NOT a simple problem to solve. The most efficient pumped storage systems lose 20% of the power put in to get it back later and there still is significant capital costs to build them. Supercritical (03:10:00) : I’d be happy to evaluate outlandish energy ideas, I do that for a living to some extent. Most fail on basic engineering. Ocean vents actually might be a reasonable idea for geothermal, but most are so far from populations that the cost of transmission would make it uneconomical, much like all the stranded gas that is used to produce LNG or methanol. JamesG (03:21:50) : Decommissioning only costs a lot because we have let the utopians demand that no risk is acceptable and everything must be restored to some fantastic pre-industrial state. I really don’t understand why we should ever decommission plants. They should be updated and expanded. We don’t demand that any other kind of enterprise restore their sites at the end of their useful life or even require a shutdown. Fundamental problem with your comment that we should pursue geothermal is same as main thrust of the article. If you “direct” money to someone’s pet project, it is going to force everyone else to pay more for their power than the coal or nuclear project could have sold it for. Roger Sowell (10:14:00) : If you represent renewable energy in California, you clearly have gone over to the dark side. 13% renewable energy has doubled California’s energy cost and manufacturing has fled the state.. If you represented the utilities and helped them get some nukes and coal plants sited, CA would still have a shot at avoiding bankruptcy. Ray (10:55:28) : What do you think engineers have been doing for the last 200 years? There is no hope in making anything but slow incremental changes in engine efficiency as better materials are developed, and even these have almost ground to a halt because NOx emissions increase as engine operating temperatures increase. Roger Sowell (13:09:25) : I don’t understand why you think the French Nuclear program is a “con”. You imply that they are subsidizing their neighbors by exporting nuclear electricity at a loss? I have never heard the French accused of irrational altruism. Ralph ellis (12:06:06) “Yes, but that includes nuclear physics budgets – answering the fundamental principles of physics. CERN is science, not nuclear power.” Thanks Ralph. Yet another case of the basic dishonesty of the anti nukes. We’re in a fight for our lives and the survival of our technological civilization which for all its faults is the best humans have ever done. No quarter, no prisoners. Mikkel (03:05:39) : You are right that we don’t always use the cheapest thing available. However, I don’t choose the cheapest jeans because I find the more expensive brand more comfortable. I don’t buy the cheapest food because I find the better brand are tastier. In my county, electrical users have the option to pay 15% more for wind-powered energy. I choose not to because I get just as many “green” electrons at my house as they do, and I don’t have to pay the “green guilt” stupidity surcharge. What I’m getting at is that it is a personal choice. The U.S. was built by people that were tired of being told how to live. When government takes that choice (read Liberty) from you, then government is telling you that you are too stupid to decide for yourself. You may not know Nancy Pelosi or Harry Reid, but I don’t want their type to be choosing for me! adoucette (12:47:13) :. Yeah. It’s called a drought… As you have observed, a large part of our renewable portfolio is dams and hydro. Unfortunately, some government geniuses believed the broken statements that we were going to have lots more storms due to AGW and dumped a Pot Load of water out to the ocean. Others dumped a load of water because they believed some fish needed it. Now most of the reservoirs are way low… just in time to need the water and power… The bottom line is that the wind, geothermal, and solar facilities don’t change fast here in California, but the rain varies dramatically. Its our climate, er weather, er, that changing ocean and rain / snow stuff… and the weenees who decided that we didn’t need to store water in our water storage system… Fuelmaker, Well, then, we’ll just have to agree to disagree. You state that “13 % renewables has doubled California’s energy cost and manufacturing has fled the state. “ Doubled from when? California’s electric power price is not that much different from the national average, per the EIA. And are you sure that power price is the reason for manufacturing fleeing the state? Surely it has something to do with high state income taxes, inability to retain workers due to high real estate prices, high cost of training workers who are functional illiterates due to the superb public schools in CA, and excess government regulations such as workman’s compensation insurance premiums, and unbearable environmental regulations? .” All of the ESS I mention in my blog, and above, work quite well. The only drawback is cost. And in many applications, as I wrote earlier, that 10 seconds of waiting without power for a diesel generator to crank up is worth the cost of a flywheel. You should be aware that SCE is installing large batteries for energy storage on Catalina Island, having just received a permit for those from AQMD. Batteries are not water behind a dam, nor refined fuels in a storage tank. “If you represented the utilities and helped them get some nukes and coal plants sited, CA would still have a shot at avoiding bankruptcy.” Are you really from California? If you are, and in the energy business, then you should know that new nuclear power plants were banned by law in California decades ago. More recently, new coal-fired plants were banned, effectively, but existing contracts to import coal-based power will be honored but not be renewed. How exactly do you propose that I help utilities get some nukes and coal plants sited, under those conditions? Can you obtain the votes in California to repeal those laws? California’s bankruptcy is not an if, but a when. Utilities will play a very small part, if any part at all, in that bankruptcy. AB 32 is the primary cause, as history will record. A busted state budget system, with no ability to borrow money, and the federal government no longer passing out money, plus an electorate that just voted down the tax-raising measures, will also contribute. I agree that California is headed for bankruptcy, and that is likely a good thing in the long run. This state is running up the deficit at the rate of $2 to $4 billion per month. No society can or should exist with 49 percent of the people paying the taxes, and 51 percent of the people controlling the votes. This place is long over-due for reform. Bankruptcy may be just what the doctor ordered. “I don’t understand why you think the French Nuclear program is a “con”. You imply that they are subsidizing their neighbors by exporting nuclear electricity at a loss? I have never heard the French accused of irrational altruism.” You are not alone in the confusion. Let me try to explain this “French nuclear con.” As I wrote on WUWT “Now That is A Commencement Speech,” (or similar wording), France now charges a low price for power, and exports some of that power. They can do that because their nuclear power plants were built at taxpayer expense, or in other words, subsidized. Where people get conned is believing that utilities today can build new nuclear plants, even to the extent of obtaining 80 percent of all power sold in a country, and charge 2 cents per kwh. That cannot be done without massive government subsidies, with a new nuclear power plant costing $10,000 per kw, and more likely $12,000 per kw with the NRC requirement to withstand an impact from a large commercial aircraft. If anyone does not believe this, please, take up my challenge from an earlier thread. Go to an investment company, ask them for $18 to $20 billion dollars for a twin-reactor nuclear power plant, tell them that no income will be headed their way for 6 to 8 years while you build the thing, and that you have a contract to sell all the power for 2 cents per kwh. Heck, make it 5 cents per kwh if you like. ralph ellis (06:48:39) : I agree with you that energy is not free and there is no such thing as a perpetual motion vehicle—at least none possible with current know how. What I was talking about was something said in the video I posted. Look from the 2:37 to 3:23 minute of this : Fuelmaker and Roger – so what do you think of the idea of wasting!! Roger I sincerely thank you for your reply. I have been frustrated by environmental regulators and regulations that generally have just obstructed real environmental progress and by the tone of your response, I believe we agree more than not. I haven’t tried to do any energy work in CA for many years. I did just go to the PG&E website and checked the general service rate for business schedule A-1. The price per KWh is about $.18. This is about twice the national average, more than twice what my business pays. You may be comparing the EIA numbers for wholesale power, which do not have all the subsidies loaded on. I agree that power price is just one other factor in a host that you mention why CA manufacturing has fled. I stand by my comments that utility scale energy storage will do nothing for renewables. Of course you are correct that they have a place for special purposes. I feel your pain that SCE had to get approval from the Air Quality Management District to install batteries. I am not from CA, 4 years was enough. I know you are but one man and most realistic organizations have given up on trying to get nukes or coal plants built, which would require repeal of several statutes. But when sensible people like you do not speak truth to power, we will continue to be led by ignorant fools. Of course nuclear power operating costs of 2 cents are misleading. But I still stand by my statement that France is not subsidizing it. They are likely only exporting when their native load is low, when no one else needs it and is only saving coal fuel costs of 2 cents. You can be sure that when there is a winter peak in the north and France has some to spare they will charge the going rate for peaking power with old gas turbines and charge 10 cents. Slowtofollow Wasting less is a great personal goal. I am a typical dad who turns off the lights and the AC and heat. There is no lack of information on how to save energy and no lack of capital to invest in economic replacements. It will not save enough energy to live off of renewables unless you triple the price. Economists modelling large scale energy provision sometimes leave out equations. If you owned a power plant that was penalised to subsidise wind energy, it would be a natural competitive business instinct to claw back that enforced subsidy if opprtunity arose. Consider this year 2009 extract from “Word Nuclear News”: Nord Pool, which has announced that from October the spot floor price for surplus power will drop from zero to minus EUR 20 cents/kWh. In other words, wind generators producing power in periods of low demand will have to pay the network to take it. Nord Pool said that “A negative price floor has been in demand for some time – especially from participants trading Elspot in the Danish bidding areas. … Curtailment of sales may give an imbalance cost for the affected seller and thus creates a willingness to pay in order to deliver power in the market.” This is likely to have a negative effect on the economics of wind power in the region, since a significant amount of Denmark’s wind power production is affected. WNN 1/4/09, Nord Pool 4/2/09. Yep, that’s right. Wind power is bought at negative prices in off-peak. Revenge is sweet. What’s more, we can expect this tactic to be used all over the world. Who ever expected that compulsory subsidies should be forever? Commerce does not work that way. In the UK and US we have big energy lobbyists trying to cash in on global warming by recommending those useless windfarms that will be torn down within 30 years. In Sicily, well, the mafia has decided they will cash in on it too “you should know that new nuclear power plants were banned by law in California decades ago. More recently, new coal-fired plants were banned, effectively, but existing contracts to import coal-based power will be honored but not be renewed.” Better buy your backup generators now. Upthread someone mentioned hydrinos Pragmatic (13:12:47) : And a program to investigate new physics should not be limited by political influences. If there are potentially viable low energy nuclear, or Millsean hydrino-type reactions that are repeatable – they should be openly funded. That sent me off on an untamed Anatidae bird pursuit that ended here: Which claims to be a university verification of their net energy out / gain. I can’t tell if this is bogus or real, so I’ve collected some of the links I ran into and ask the real / not real question. If anyone has clue, I’d love to share in it… This hydrino thing also came up under the infinite energy thread. Rather than take everyone here down this rat-hole with me, I’ve put my question in the following link: And anyone who would like to enlighten me is invited to do so… @Geoff Sherrington (21:32:06) : Re “Wind power is bought at negative prices in off-peak” This is great news for the energy storage systems developers, such as my clients. Rather than sell the power at time of production for a loss (negative price), one can store the power for sale at a later time at a positive price. I do hope they keep this up! @Fuelmaker, “I stand by my comments that utility scale energy storage will do nothing for renewables.” Wind-power that is integrated with pumped storage hydroelectric is utility scale energy storage, and it works wonders. @Pofarmer, “Better buy your backup generators now.” No need. Natural gas is abundant in California, thanks to the LNG regasification plant a few miles to the south in Mexico. Answer NO. Never seen so much [snip] outside of AGW. Perpetual motion indeed, not even the universe itself can manage that. I understand that it is difficult to accept the notion that a quantum mechanical view suggests that particles can just appear and disappear: but please understand that this isn’t real merely the best way we have of describing what happens in the real world in mathematical terms. Unfortunately the charlatans and mountebanks seize upon our ignorance and so have wondrous magical things not known to physics. And sell them to the credulous. It was ever thus because serious natural philosophers have to investigate every avenue: and can be so easily deceived by trickery. Was not the great Tycho Brahe the astronomer and astrologer to the King of Denmark? Did not Newton study the bible for clues . As I said in a previous post the best training for the inquisitive is study the art of stage magical illusion.: I mean no insult nor cast any aspersion, you are naturally curious, which is exactly what these quacksalvers depend upon to peddle their rubbish. But you are wise enough to ask others. And again the answer is NO. Its bunkum dressed up with pseudo scientific jargon designed to impress. So I hope you will not be offended by my bluntness. Kindest Regards Still invest if you wish. Does this answer your question? >> I can’t see the moon going away – so I think we are >>fairly safe relying on tidal sources as a big chunk. As I said in the article, tidal power is probably the worst of the bunch. Yes, it is predictable, but it also predictably switches off too. In theory, you can get four generating periods a day, but since generating on the inflow inhibits the resulting outflow, the current thinking with the Severn barrage is to generate only on the outflows – twice a day, or about eight hours generating. Now if you can run a city or a nation on eight hours of power a day, I would like to know how. Even worse, twice a month, those power peaks will coincide with minimum electrical demand, so is rather useless. Worse still is neap tides. It is surprising how many Green proponents love tidal power, but have never heard of a neap tide. During neaps (twice a month), there will be the square-root of not a lot of electrical generation. In summary, tidal power looks wonderful, but would destroy any technical society (unless we live and work exclusively in synch with the Lunar cycle). . >>What I was talking about was something said in the video >>I posted. Look from the 2:37 to 3:23 minute of this : I see what you mean, but you should have put more smily faces at the end of your post. I thought you were possibly supportive. The end of that info-mercial about the air-car is pure lies, and a decent consumer protection authority would shut them down immediately. It is half-truths and lies like this that get the Greens frothing at the mouth and demanding more renewables. Or perhaps the factory could demonstrate their miracle-compressor working, with no connections to the outside world whatsoever, no electricity, no petrol, no gas, no battery….. . >>This was very forward-thinking in 2004 and it reads, >>to me, like it was written yesterday. As soon as I saw this issue being politicised by the UK government, I could smell a rat and decided to investigate. The present UK government has never told a single word of truth since they took power, so if they were backing renewables it was 100% certain it was a pointless blind-alley. And the government were on form, as usual… And for the poster below, I did not include solar and geothermal in the article as they are not great sources for the UK, and this was written for the UK market. Anyone who has lived in the UK for a few years would not dream of using solar power here. You could try, but it is a great way to make a small fortune (but only if you start with a large fortune). Regards geothermal, yes we could get some residual heat here in the UK, but we have no obvious hot-spots that could be really useful. We have no Yellow stones here. Bath gets hot water (hence the name), but we are only taking 40 degrees cent. I might look into it further, but I think large extraction would be uneconomic and cool the small heat-plumes we have rather quickly.. >>Wind-power that is integrated with pumped storage >>hydroelectric is utility scale energy storage, and it >>works wonders. But only if the government underwrites it. The Dinowig pumped storage facility was the most expensive power station in Europe, especially as the Greens decided it had to be buried in a mountain (to be environmentally friendly). No commercial operator would dream of such a facility, it is a gold-plated government white elephant. And while you might think this wind-pumped combination may make sense with cheaper designs, it will still not power any more than a fraction of UK demand. You would need 700 Dinorwigs to power the UK for a week of zero wind. That is simply not feasible, neither economcally nor topographically, and so wind-pumped systems will only ever be able to provide a small fraction of UK energy requirements. So we come back to the same old problem. Renewables cannot provide base-load electrical supply, and so will only ever be the froth on the cappuccino. (All presentation, and no taste or substance.) . Roger Sowell (22:41:23) : 26 05 Roger, you are intentionally missing the pint. The point is that nobody likes bludgers who exist only because of subsidies taken from genuine others. They will claw them back over time.). I would agree for electrical energy, however evacuated glass tube solar water heaters DO work. Even on cloudy/winter days some heating is provided. for example (will take a long time to get your money back!) Truly free energy :o) @Geoff Sherrington (04:56:44) : “Roger, you are intentionally missing the pint. The point is that nobody likes bludgers who exist only because of subsidies taken from genuine others. They will claw them back over time.” I believe I understand the points made. From your dislike of subsidized power plants, you would then agree with me that nuclear plants should not be built. .” We differ on this. It matters greatly to the owner of an intermittent power plant. Why would one double the size of the installation? I would size the thing to produce the required power, store up the excess, and release the excess upon demand. Yes, storage brings a loss. Generation in a fossil or nuclear plant brings a loss, too. It appears your remarks refer to thermal storage, useful for solar, but consider pumped storage hydroelectric. …..” I have maintained all along that intermittent power, standing alone, does not and will not replace fossil power. No competent engineer says otherwise. But, intermittent power with adequate storage will do so. You might be interested in reading my views here: ralph ellis, That is a well-written article, and congratulations for having it published on WUWT. We obviously disagree on the major points, but that is fine. I welcome the opportunity to read the views of others, and to share my views. The U.K. must seize the advantages available to it, or as gamblers would say, play the cards you are dealt. Wind power with hydroelectric storage may not play a major role for the U.K. Yet, there is an enormous ocean current flowing past the islands, which should be seriously considered. An earlier thread on WUWT discussed this. Ocean current power plants do not suffer from intermittency issues, nor require storage. As an example, California has substantial hydroelectric and geothermal because our terrain is suited for those. However, we are not blessed with much ocean current, nor on-shore wind, so those technologies will never provide a significant portion of the state’s power. But, off-shore wind and wave power are a different matter. I would not count wind out just yet. There are some storage systems in the works that will suit many applications just fine. slightly off-topic, but Econcern, a large company in renewable energy projects (amongst which a windfarm on the horizon of a beach where I used to watch the sunset regularly; but also many other projects in the UK, Spain, Turkey etc.), has filed for suspension of payments in the Netherlands. Until recently the company was loaded with praise, manager-of-the-year-awards, a share in the 2007 Nobel Peace Prize and annual growth figures in the triple digits. I’m not a financial nor judicial mastermind, but according to the Dutch Wikipedia entry on suspension of payment (surseance van betaling), which unfortunately does not cite references, it works a bit differently in the Netherlands than in most Anglo-Saxan countries. Suppliers do not have to continue supplying the company, and banks can foreclose loans and mortages. So while in theory this move gives the company some time to find new investors or credit, it effectively ruins their ability to run a business. According to the same Wikipedia entry, 98% of all companies filing for suspension of payment in the Netherlands eventually end up bankrupted. No need. Natural gas is abundant in California, thanks to the LNG regasification plant a few miles to the south in Mexico. As long as you’re happy paying probably triple for electricity as the rest of the country. You’re already over double what we pay. From your dislike of subsidized power plants, you would then agree with me that nuclear plants should not be built. Would you please stop this semantics crap? There are some storage systems in the works that will suit many applications just fine. Unfortunately, the lights may start blinking before any of that becomes “prime time”. Why would one double the size of the installation? I would size the thing to produce the required power, store up the excess, and release the excess upon demand. So, all you’re saying is that you just design it double from the start. More semantics. Roger, I read your blog and respect your technical expertise. However, you are now using the law (force of the majority) to prevent the market from delivering the cheapest power. You probably wouldn’t support a wood burning plant because it isn’t perfect enough and it would compete for subsidies with your client’s projects. You unashamedly would like to see nuclear power banned! The only way CA is going to be able to live on renewables is to reduce it’s population by at least half. Is that really your goal? Native born Americans have been fleeing CA for about 20 years now. Frankly, I think you should be somewhat censored from this site except for quantified facts, because you are very good at arguing and misleading. fuelmaker, Well, thank you! I take your comment as a great compliment, re being very good at arguing. My mother agrees with you, by the way. I enjoy wood burning facilities, although our local air quality agency restricts burning because the soot aggravates the air quality. But for a primary fuel to produce power, wood is not a viable alternative. For some high-level perspective, it might be interesting to know that I work diligently to repeal California’s global warming law, AB 32. Until that repeal occurs, (and it will take a very long time, probably decades), we must comply with the law because penalties are rather steep for willful non-compliance. California will not live on renewables, but this state has demonstrated that renewables do not make life difficult. My lights work every time I flip the switch. Our power price is not the highest in the nation, either. @ pofarmer, “As long as you’re happy paying probably triple for electricity as the rest of the country. You’re already over double what we pay. “ As a matter of record, from the EIA, California’s electric power prices are only 20 to 30 percent above the national average. As compensation, net electricity use per capita in California is below the national average. At least 10 states charge more for power than does California. See Table 5.6.B., from the link below (scroll down to the bottom of the page): Having come into this debate late I notice that Roger Sowell has again been promoting his non ending agenda of anti nuclear bias, once again calling the French nuclear industry subsidised [based entirely on the fact that it is nationalised] and now he his stating that it is a con. I can only agree with Fuelmaker that his future comments should be “somewhat censored” on the grounds that he his prolific in arguing until the cows come home. In fact his constant theme is getting on my nerves and ruining many of the threads on this site. >>Yet, there is an enormous ocean current flowing past the >>(British) islands, which should be seriously considered.. . Chris Wood says: “The Greens oppose technical progress because it is necessary to destroy it as part of their political agenda to destroy capitalism and globalization. Green Peace ceased to be an environmental group a long time ago. It merely uses the environment as a cover.” No… They desire the elimination of the human race… Excellent analysis, though three items of rather great importance immediately come to mind: A) Solar is not mentioned, and it has a multitude of issues not the least of which are a low efficiency, challenges of reliability, and synchronizing supply to demand (during cloudy periods and overnight). B) Most of facilities for these “alternative technologies” are likely to be constructed far from the existing grid. We continue to see greens (in the US) opposing power lines, and we can expect more of the same as new transmission capabilities will be needed to move this “free energy” to where it can be used. C) Nuclear power does generate a waste product. And the US Congress (largely the “Democratic” contingent) continues to stall development of real options, as well as keep reprocessing of waste off the table. The latter option was eliminated in the end days of the Carter administration, while the US was promised alternative energy options “soon”, such as solar, wind and so forth. And those options were known then to be unrealistic, and today they remain not ready for prime time. An interesting assessment of wave, tidal, and ocean current energy technologies from NREL: “Status of Wave and Tidal Power Technologies for the United States” dated 2008. This references 22 papers/sources for further reading. Well there’s avery simple way to look at any “alternative energy” system. You tell me what your favorite energy plant is; lets say coal for example. How many gigaWatt’s operating power do you want lets say 1gW. So I’ll build you your 1gW coal plant, and give it to you for free; so please don’t anybody mention economics; I just solved that problem for you. What if I also give you a mountain of coal, say enough for 100 TWatt-hrs of energy production. Now after we get the preliminary paperwork taken care of, you can sell your power for whatever the market will give you and make yourself a fortune; how cool is that ? Oh the paperwork ! I’m not really as altruistic as I may have seemed at first; I too would like to get filthy rich; so here’s what I want you to do for me; a sort of thank you gift for the nifty coal plant I gave you and the mountain of coal. Before you can sell some power to get rich, I want you to build me a duplicate of the plant I just gave you; and replace my mountain of coal that I gave you. What you have to work with; and ALL that you have to work with, is the energy that is coming out of your plant; plus all the raw materials in the universe in their natural state. So you are going to need steel and concrete and other construction materials; those aren’t available anywhere but in the ground, so you will have toi mine them and refine them. Aren’t any tractors or bulldozers available for you to do that; you are going to have to build those too, since all the existing ones are in use by other people. You are going to need people to do all this work. Everybody is busy doing some other job; you’ll have to get someother people from somewhere else and train them and house them and their kids, and feed and clothe them; educate the kids. Well you get the idea; your energy source that I gave you has to do everything required to replicate the system, and then if you have any energy or coal left over, you can sell that at market prices. I realize that you are only going to need the bulldozers etc and some of the people for how long it takes to duplicate the system; so you only have to cover the energy capital and expenses during that period. Most so-called alternative energy “sources” fail this test; they cannot duplicate themselves; so they actually are energy wasting schemes, not energy producing schemes; and they need to be nipped in the bud before precious resources are wasted on a pig in a poke. One final comment. Wind/biomass/hydroelectric etc are all “Solar”, just like PEV or solar thermal steam turbine systems. They are all limited by the 168 W/m^2 global average solar energy insolation. One thing is for sure; starting from the fig trees and the gathering monkeys; we somehow pulled oursleves up by our own bootlaces to where we are today; and mostly it was done with stored chemical or nuclear energy. We don’t have any evidence that we could get here without the stored energy; which means we don’t have any evidence that where we are is even sustainable by renewable green energy. So those who want to eradicate fossil and nuclear stored energy sources; and rely solely on renewables at 168 w/m^2, are living in a fantasy land. George ralph ellis (14:01:25) :. Yet when I mentioned thr japanese idea for getting uranium from seawater using a polymer you said no problem! To satisfy the French [current nuclear requirement] will require 2.1*10^6 tonnes of polymer. sea water has 3*10^-3 gms uranium in 1 cu metre so the polymer will need to see 4*10^12 cu metresof fresh sea water over the year. The polymer will therfore have to be sunk into a deep water current and all 2 million tonnes dragged up from many meters down. Expanding this to the world: World electrical requirement 16,830,000,000MWh France electrical requirements 451,500,000MWh of which nuclear = 78% ==352,170,000MWh this requires 2.1Mtonnes of polymer The world requirement is therefore for 48Mtonnes of polymer to extract uranium from sea water. This would need to be placed in a/many strong natural currents. As you said Mr. Ellis “You play with Mother Nature at your peril”). Reminds me of when my mom took a trip “home” in July. Upon return (having lived in California for 18 years I guess memory fades…) she announced annoyance at the rain on the 4th of July but did say “We were there for Summer this time, July 18th it was.” … ;-) There are some places where solar is just incredibly stupid. Places that are overcast and raining any / all days of the year are among them. My red-head transparent skin gene is an adaptation to get Vitamin D from almost no sunlight. Think about it… Solar is great anywhere that folks naturally had brown or black skin. It is not so usable where the folks are white skinned historically. It is darned near useless anywhere redheads were the dominant phenotype… (This rule does not hold for places like Australia, Texas and California where a whole herd of us have moved in despite the skin cancer “issue”… we’re talking pre-modern age population distributions…) Now if you can ever develop cloud and fog power … 8-} Pofarmer (22:11:16) : Better buy your backup generators now. Fortunately (or maybe unfortunately…) I already have TWO of them from the last time the “Government decided to help” with my energy choices. At that time they demanded that the energy companies have NO generating capacity, NO long term contracts, and buy ALL electricity on the spot market from moment to moment… Yeah, mini-bar prices and buying a banquet for 20 million off the appetizer menu. Worked about as well as it sounds… Now they are “helping” by demanding that we not use the major base load low cost reliable sources of energy. Yeah, that’s the ticket… Any wonder I’m not so keen on the government running businesses? So yeah, I’m prepared. I did take out all the mini-UPSs, but I can put them back (had one in each room for the electronics, clocks, and one light – gave me time to start the generator and swap over to it without resetting clocks or interrupting the TV show… Yes, we had blackouts that often. Pretty much any day the temps were high and any time somebody didn’t make a spot buy Just In Time…) I probably ought to buy a couple of more cans of gas and make sure the Big Boy still starts. We have a small one (Honda 1 kw – just love it. Size of an overnight bag, weight is near nothing. 56 db when running (quieter than people talking who run about 75 db). Runs forever on a quart or 2 of fuel (8 hours?) and my spouse can start it with a gentle pull. Enough to run the fridge (modest sized, efficient), entertainment cluster, and lights in each room of the house (compact fluorescent) along with the misc stuff like laptops, printers, etc. with some left over. and… We have a large one, the Big Boy. 4 kW continuous, 5+ kW surge. Briggs & Stratton. Noisy as all get out, but can run on darned near anything close to gasoline. Used for running the washer / dryer set and HV/AC plus any major power tools if needed – like the electric lawn mower. My spouse can not start it (and it’s a bit of work for me…) but it was darned cheap (I paid about $300 for it on clearance prior to the power “issues”…) If I were doing it over again, I’d get the Honda 3 kw multifuel (gas propane natural gas) job with electric start and plumb it into the natural gas line. Then, if we had enough “issues” I’d install the battery box and inverter that I bought (but didn’t use since we changed governors…) and just run the house off storage and let the generator cope automagically with what charges the battery box; grid or it. (Need to buy an automatic power controller… no big, or just float charge the batteries from a grid charger and let the generator kick in if the power fails…) Welcome to a day in the life in California… (And they had to send a delegation to Nevada to find out why folks moved there… Can you say STABLE POWER from Hoover Dam?… and those nice nuke plants just outside Phoenix … and the coal slurry line that feeds the coal plant near Laughlin Nv. straight from the mine in another state, and…) So yeah, I’m “prepared”. Need to make sure the Big Boy starts (haven’t run it in a year or two, but stored it dry.) and need to find where I put the (empty) battery box and inverter kit; but ready enough. BTW, on one occasion I was involved with putting in computer rooms around here. On the large new Sun campus, whole buildings were being put on standby and cogen generators (about the size of 18 wheel truck boxes) sited next to the parking lots… so that the company could keep running even with the (frequent) rolling blackouts. It doesn’t take long for companies to figure out that relocating is cheaper than duplicating all the infrastructure that they can expect to be stable elsewhere… I’d hoped all that was behind us now, but no… ralph ellis (01:10:30) : So we come back to the same old problem. Renewables cannot provide base-load electrical supply, and so will only ever be the froth on the cappuccino. (All presentation, and no taste or substance.) While I agree with much of your point vis the UK; your statement is just not true for the rest of the planet. And that’s a big place (so the less we use, the more fuel is available for you ;-) For example, the solar plant in the Mojave desert is almost exactly matched to the peak demand in California (both daily and seasonally) since that is from AC demand. We (California) can add a boat load of solar before there is any storage problem and in fact it reduces the storage problem we already have (freeing the pumped storage we have for use with wind, for example). For the UK, your abundant WAVE (not tidal) power is available substantially year round and is, IIRC, stronger in winter when the large storms hundreds of miles away make the waves (even if you are under still air). Great for you, useless for Arizona 8-) That the UK is going for wind or solar makes no sense to me at all. Wave and geothermal ought to be good, though. I would also expect there to be somewhere with strong currents off shore where ocean current generators could provide base load year round (a system like this is proposed for the Gulf Stream off Florida). The UK ought to be able to put in a trash to Diesel facility (such as made by RTK Rentech) since I suspect you do have trash to dispose of.. Renewables are very usable, if custom choices are made for each location. They are incredibly dumb if used in a one-size-fits-all way. I also would not think of powering a country off of them without a decent chunk of nuke and coal for at least the next 2 or 3 decades as the markets sort it out. Hey! why don’t we build a satellite that will follow the sun around the globe gathering energy 24/7. Then of course, we would need ships and trucks to follow it around on the earth transferring the energy to power plants. They, of course would use electricity to move themselves as they follow the satellite around. Oops! they would probably use up all the energy they collected! So, back to the drawing board. Hmmmmm! ralph ellis (00:50:05) : Regards geothermal, yes we could get some residual heat here in the UK, but we have no obvious hot-spots that could be really useful. There are two major kinds of geothermal. Large steam electric plants that need near volcanic heat; and ‘ground source heat pumps’ where you put the pipe in the dirt near the home and use the 56 F or so (15? C) as the heat source for a heat pump to warm the home (rather than below 0C air…) The ground source heat pump ought to be a ‘feature’ (though they are ever more of a feature the further below 0C the air goes …) Just as a heat pump is a more efficient home heater than burning the fuel directly, a ground source heat pump is even better. Using a cogenerator device to make heat and the electricity to then run a ground source heat pump is even better, but by that time you’re talking a lot of hardware…. I’ve wondered from time to time if anyone has ever tried a halon flood. You can make very dense halons that ought to stay below ground even if heated. The other thing I’ve thought about is borated water. (Plain water can undergo ‘water gas’ reactions in coal fires, making CO +H2 … not good. I suspect that a slurry of borate would prevent this.) It sure looks to me like a couple of hours with an extinguisher chemical engineer ought to find something interesting… OK here’s the deal: If anyone wants to do this, I’m up for working in a new start-up dedicated to extinguishing coal fires for the CO2 credits. We get R&D funding from the AGW movement for a varieties of proposals to investigate {every known and a few unknown extinguisher formulas} and apply for patents on any that work in coal fires (process patents, if needed due to prior art). We then package the CO2 credits based on the projected CO2 output from the next 200 years of fire and sell them as each site is extinguished. There’s a LOT of coal fires and a LOT of CO2 credits to be harvested… I wonder if there is a Halon equivalent of a silicone oil… silicon backbone with oxidation inhibitor functional groups on it… When looking at houses back in late 2001 we were pleasantly surprised to find the one we wanted had a 10kw propane generator (4 cyl) attached to the house. It’s in a small, insulated shed with it’s own exhaust pipe and vent for the radiator. It’s a small industrial model. I was told by one of the guys who services it that the City of Tacoma (WA) has the identical generator and relay panel as a backup foir their water treatment plant. Apparently the elderly previous owners had some medical equipment that needed power 24/7. Out here on Whidbey Island wind storms can take out power for a day or two at a time. So the large propane tank (I think it’s 120 gallon, not sure) will last for weeks at constant usage. So if things really went to hell, we could save it and turn it on only for a short time every couple days as needed, probably last for months. If anyone’s interested, here are pics of the generator. bill (16:50:43) : ralph ellis (14:01:25) : Yet when I mentioned thr japanese idea for getting uranium from seawater using a polymer you said no problem! Wasn’t him. It was me. I still say No Problem! (I’d say “Hell No” but that would probably get snipped…) 8-) [tedious pointless math with big numbers deleted] The world requirement is therefore for 48Mtonnes of polymer to extract uranium from sea water. Yup. No big. Look, you seem over impressed by big numbers. The world is a very big place. That’s 2% of present world polymer production for 24 years. (You are not thinking of building this out all at once are you? That would be silly. Not enough nuke plants yet and too much land sourced U to justify that. Heck, even 24 years is pushing it…) So polymer source is not an issue. This would need to be placed in a/many strong natural currents. As you said Mr. Ellis “You play with Mother Nature at your peril” Well, it actually can be placed in many slow natural currents or even modest tidal areas. It takes a year to soak out the U so you don’t exactly need a swift river here.. Now. But hey, I’ll spot you an order of magnitude… Divide your tons by those miles and you get 800 tons / mile or about 7 ft per ton. Given their demo device was, IIRC, per foot. Lets call it 75 m. The other dimension was 30 m again IIRC,…) Now I have no expectation that we would actually do this. It’s a “thought experiment”. There will undoubtedly be better engineering solutions than actually putting a bow on the whole planet. (Personally, I’d expect OTEC to be used to provide free pumped water – Ocean Thermal Energy Conversion) and I’d expect that a longer ribbon run deeper would be used. A continuous belt so the lift energy on one side is offset by the submergence on the other. And finally, I don’t think we’ll actually power the whole planet this way for about 100,000 years or more due to all the U and Th available on land and the use of advanced reactors that have a 300% to 400% better fuel efficiency (but hey, I’ll spot you a 4x for that too…). I’m just showing that its possible in a worst case. And it is. The purpose of this thought experiment is just to show how puny the impact would be on the ocean currents. Miles wide and thousands of feet deep ocean flows will not be upset by a few dozen feet wide and deep ribbon that is mostly empty space. Even if the numbers are big. E.M.Smith (20:08:28) :That’s 2% of present world polymer production for 24 years. Opps. My bad. That ought to be 1% of production. But heck, I’ll spot you a double on the polymer too… To put things in perspective on the size of the polymer and it’s impact on the ocean in an ocean uranium m no, bill, putting a plastic blankey in the water is not going to be a problem. Especially not if we can then stop shoving 2 BILLION tons of oil through it at 10 times the speed of the current while burning oil to do so. (IIRC it was 1 kg / 350kg of filter / year) that ought to give about 714,000 kg of U per year. At $150 / lb (high for now, but we’re talking when the land stuff is scarce) that’s $107,000,000 per year or about $293,000 / day. Day rates on a tanker are often in the $50,000 scale, so we’re in big profit land. I’m also pretty sure we will not be using a significant part of that 714,000 kg of U to power the nuclear engine for a year. Golly! I think I’ve just invented another category of ship: The Uranium Miner Or in this case, the Ultra Large Uranium Miner, or ULUM It looks to me like it could be very profitable. IFF uranium ever gets out of the basement of depressed prices… (Who do I talk to about patenting the ULUM ? ) For those curious about the impacts of Geothermal Energy production, here are some of the results of Public Scoping for an Environmental Report done about twenty five years ago. The project was to be on the southern end of the Mendocino National Forest located in northwestern California north of Clear Lake and was proposed to be similar to the existing Geothermal development at the Geysers on the south side of Clear Lake. Bear in mind that many NIMBY’s respond to this type of scoping as well as dyslexics that cannot tell the difference between a National Forest and Natural Forest and people that associate Mendocino with booming surf and giant redwoods. The proposed project is located some fifty miles from the coast in a semi-arid area supporting mainly brush and scrub pine trees. The geothermal areas are along faulted serpentinite zones that sometime grow rare magnesium tolerant flowers and the area has the remnants of several old hot spring resorts dating from the late 1800’s. These two areas of possible impact were not addressed by the scopees. Objections fell into four main groups; Visual: In addition to power lines, cooling towers and turbo-generating buildings, the surprising main visual objection were the clouds of steam that would rise hundreds of feet into the air as it does at the Geyser Project. Noise: The sound of the steam turbans is not unlike that of a jet plane and can be heard for miles. There were also complaints about the huge drilling rigs used with their roaring diesel engines and large bits grinding through hard rock for months on end. Pollution: The steam distillate often contains heavy metals that can pollute streams, lakes, wetlands and groundwater and must be hauled to a disposal site. Intrusion: The project would intrude on the “Natural” Forest with large roads needed for access by the gigantic drill rigs and the large work and building pads which are often located low in the canyons near streams and would destroy the pristine nature of the forest.. The project never came to fruition. I’ll admit up front that I have not actually read the entire thread of comments, so mine might have already been covered. However, a loyal reader at Atomic Insights told me there was an interesting conversation here and suggested I might want to offer a few thoughts on atomic energy. While skimming through the comments, I noted that there are some people, like Steve Goddard, with some serious misconceptions about the potential benefits of atomic fission compared to coal combustion. They claim to be concerned about nuclear weapons proliferation and/or the long term hazards of accidents like Chernobyl. I think Steve even claims family effects from the accident, though he is not very specific about what they were. My personal witness is that I once spent about 700-800 days (total, broken up into 11 different periods) sealed up underwater with an atomic power plant that provided all of the power needed for heating, cooling, computing, lighting, entertainment, cooking, and propelling for a 9,000 ton submarine with a crew of 150 people. That power plant was a steam engine heated by a tiny volume of fuel that could fit under my office desk. That fuel lasted for about 14 years under rather heavy service – the ship had two crews and spent most of its time at sea. For about 400 of those underwater days, I was the Chief Engineer, so I had a pretty comprehensive view of the technology, training and people required for safe operation in some very challenging environments. Having had that experience, it is impossible for me to accept the negative comments that people make about nuclear power. That has been reinforced by a lifetime of up-close and personal experiences plus formal academic and technical training with coal, oil, gas, storage batteries, solar and wind power. Atomic fission produces TINY amounts of waste that can be readily isolated from people and stored indefinitely. No one has ever been killed or even injured by accidental exposure to stored used fuel from a power plant, even though we have been handling it for more than 50 years. It can be dangerous if not properly handled; that is why we train operators to make sure they understand the simple principles of time, distance and shielding. I am confident that we will be able to keep teaching people how to responsibly handle the material as long as humans have the ability to speak. If we lose that ability, all bets are off anyway. I am not reflexively opposed to fossil fuels, but having seen what I have seen with fairly primitive nuclear technology, I cannot help knowing that fission has a hell of a lot more growth potential than fire. (Note: That amazing power plant I described above was designed in the 1950s, less than two decades after the basic physical phenomenon of self sustaining fission was first proven in a lab experiment.) Humans have been using controlled fire for at least 10,000 years and we have been living off of the earth’s stored capital of concentrated hydrocarbons for nearly 200 years. In many places around the world, we are starting to scrape the bottom of the stored fossil fuel reservoirs. In others we are choking on the waste that is naturally produced and released to our shared atmosphere and waterways. However, we do have a very rich, powerful and entrenched industry that extracts, transports, refines and markets fossil fuel. That industry HATES effective competition and has worked for at least 50 years to tie fission down. In my mind’s eye, I see the fission power industry as being like Gulliver in the land of the Lilliputs – it is tied down with a bunch of easily broken threads by a bunch of self-important but ultimately powerless people. There is a good reason that the organized opposition to fission power has been able to exist for so long – there is a natural alignment in goals with some very rich pro-fossil fuel interest groups. They both like high priced power, both want to restrict new sources of energy, and both are “afraid” of nuclear technology. The amount of stored fission fuel capital is thousands of times larger than the hydrocarbon capital and it provides a higher quality output with a much smaller volume of waste that can be readily isolated. One incredible source of stored fission fuel capital is what many people call “spent fuel”. Another one is what is often called “depleted” uranium, a third one is a completely different metal called thorium, and another one is the world’s current inventory of atomic weapons. Of course, there is also an incredibly energy rich inventory of uranium left in the earth’s crust and dissolved in ocean water. One more comment before I leave the stage – nuclear weapons and the technology needed to make them already exist. There is no way to pull the knowledge out of the heads of the people who have it and no way to control all of the needed materials. Their existence and even their ownership is less of a subject worth worrying about than efforts to ensure that the weapons will NEVER be used against people by ANYONE. MAD (mutually assured destruction) may seem like a terrible strategy, but it has worked so far to make sure that even the most aggressive people recognize that a decision to launch will probably be self-defeating and even suicidal. Rod Adams Publisher, Atomic Insights Host and producer, The Atomic Show Podcast E.M.Smith (20:49:28) : Have you checked whether this polymer is reusable? Last time I read it was an ion exchange resin that could be flushed clean of accumulated U and usedary a newbie here, but it was quite clear from the moment I started reading this blog that Roger Sowell is no troll, and is in fact a font of information on matters pertaining to energy engineering and matters legal. I don’t agree with all of Roger’s conclusions regarding the costs and benefits of nuclear power vs. renewables, but his analyses have to be considered, and they provide a valuable perspective. The subject is a complex one, especially when you consider the variations in economics and technology all over the world. As to whether France ‘subsidizes’ its government-owned nuclear-power industry, that’s an empirical question that depends on (a) the meaning of ‘subsidize’ and (b) whether, all things considered (e.g. building and decommissioning) the power plants actually earn a profit. Maybe the question is worth a separate thread here, if someone wants to do the research. In any event, the calls for Roger Sowell to be ‘somewhat censored’ (whatever this may mean) seem to me to be entirely inappropriate, and requiring an apology. /Mr Lynn I agree that Roger is no troll. There are no two people who agree on everything. If we must have unanimity here, there will only be one commenter. I wonder who it will be? Mike I never said Roger was a troll, just a very slick lawyer. The only censoring I think is appropriate is to stay reasonably close to OT. Characterising nuclear as subsidized is pretty far from the truth considering all the taxes, fees, and permitting required, especially compared to the renewables. And the whole energy storage argument was very misleading. I would be happy to hear the cost of the Catalina energy storage per kwh and it’s efficiency. I’m sure he knows this well, and I’d guess it’s at least 10 times the cost of Bath County Virginia, which was built to better use the baseload nukes before additional units were cancelled. That project was built for about $300/KW I think and returns 80% of the pumped energy when needed. It runs whenever the power value is 10% more than the predicted average and recharges by pumping whenever the price falls below 10% less. And the lakes don’t need to be replaced after a certain number of cycles. I wonder how long it will be before the Catalina project joins the long list of failed, dismantled “demonstration” projects. Rod Adams (23:15:59), Great post. Hope to see you here often. Fuelmaker above – please could you comment on the rough sum I did above for the payback on nuclear? And I second the views re: no censoring. If people state opinions that others disagree with then let each side present evidence we can all learn from. Nuclear Subsidies, in the United States: An excerpt: “Congress created a $20 billion loan guarantee program for constructing new nuclear power plants; a $2 billion subsidy for developing uranium enrichment facilities in the United States; $2 billion in risk insurance for nuclear power plants facing delays due to regulations or public opposition; a $1.3 billion subsidy for decommissioning older nuclear power plants; $1.2 billion in reactor research; a $0.018 per kilowatt hour subsidy for electricity produced by new nuclear power plants; and liability protections worth billions of dollars.” [bold emphasis mine — RES] Well, then, if true, that 1.8 cent per kwh subsidy should provide zero cost power…no wait, since it only costs 1.7 cents, the nuclear power plants will be paying me 1 penny for every kwh I consume. [tongue-in-cheek off now…] David Porter (11:53:03) : Having come into this debate late I notice that Roger Sowell has again been promoting his non ending agenda of anti nuclear bias, once again calling the French nuclear industry subsidised [based entirely on the fact that it is nationalised] Well, nationalization is the ultimate form of government subsidy. Once you’ve pumped so much money into something that you own it, it’s kind of hard to argue that it’s not the recipient of your money… Look, I like nuclear. Not a whole lot, but more than a lot of the junk out there. It works. It’s manageable. It makes nasty crap at the end, but the stuff is not nearly as bad as the anti-nuke folks make it out to be. NONE of that changes the fact that the French have a very long history of subsidy for many of their large industries in order to achieve social goals (like employment) and strategic goals (like maintaining nuclear expertise even though they are a relatively small country… they do have a nuclear bomb program to support…) So if you want to throw rocks at the question of French subsidy, do that and don’t throw rocks at the messenger. I’m quite certain you will find a large pot of French Government Money flowing into their nuclear program to subsidize it. That will rise as a fact. Then we can move on to the question of was the subsidy good or bad… (I can argue both sides of the subsidy is good / bad case. Required of Economists since when you graduate 1/2 will work for the side saying subsidy is needed and 1/2 will work for the other… My school dutifully prepared us for whichever side gave a job offer. 8-) My personal opinion is that in the case of France, the subsidy was a great idea. Bomb program gets good supply of trained staff and materials due to larger economies of scale. France rises to the top of a profitable global industry. Rakes in buckets of money from places like Iraq. Establishes a secure energy foundation for the country. Provides a strong economic advantage to every OTHER French industry via low power costs. Keeps all the voters warm and in well lit homes. Keeps coal smoke out of the French provinces … and noses… Consider the alternative: France with little or no oil, coal, etc. Firmly dependent on OPEC and related for every drop of fuel they use. Massive coal imports and burning making lots of pollution (including heavy metals getting on those wine grapes…) Economic competitiveness bouncing up and down with the oil price (YIKES!) as billions of Euros flow from their hands to OPEC. A terribly expensive bomb program to support with little supply of staff and materials from a large scale industry – the whole thing would need to be on a small scale inefficient basis. Prices for electricity to homeowners and businesses rising rapidly. Unemployment and economic depression leading to social unrest. Gee, they would be just like the U.S.A., but without the domestic coal powered electricity providing stability… Oh, and the French air would have a distinct odor of coal and oil fumes… I can only agree with Fuelmaker that his future comments should be “somewhat censored” on the grounds that he his prolific in arguing until the cows come home. If that’s the criterion, then 90%+ of the contributions here need to also be “somewhat censored” (which is sort of like being “somewhat pregnant”…) In fact his constant theme is getting on my nerves and ruining many of the threads on this site. Look, I find the nay saying about nuclear a bit tedious too. I *want* nuclear in America. But his contributions are a valid part of the debate. If you don’t like his point of view, then develop a counter argument. Save it in a text file and every time he posts “Foo”, you post “Bar”. Quick and easy. Both sides will be aired and folks can decide. The bottom line for me is that in California a large part of what Roger says is exactly “spot on”. It is an Alice in Wonderland place with insanity in the electrical system as the norm. Nuclear is (even if it were allowed by law) not economically possible here. Just think of the fact that the “nuclear-approved” by the NRC materials are often forbidden here by the CARB and other state agencies. Degreasers and solvents normally used to clean parts (prior to, for example, welding – a critical step…) are ILLEGAL here. So you think that might raise costs a bit? So I’ve tried to find grounds to say Roger is wrong, and I have not been able to do so. The closest I can come is a weak argument (so I’ve not put it forward) that in some other theoretical place where governments didn’t interfere with national agendas and regulations someone theoretically could make a cheap efficient reactor. Oh, and the corollary that maybe some new magic reactor design will do the trick “This Time For Sure!”. All hypothetical hand waving. So I’m waiting for an existence proof. So far all we have are national systems with hugh government money flowing into the pot largely contaminating the economic picture for political agenda reasons (broader economic support or bomb programs. National energy security.) Russia: USSR built system 100% subsidized in construction. Bombs too. France: Social services and bomb program justified subsidy. Fuel security. Strategic support for world economic competitive posture in other industries. Fuel cycle supported by the government. England: Bomb program. Mixed economics due to this. Confused at present. Not much new to base a cost analysis upon. Most reactors fairly old and built under different regulatory schemes. Some decommissioning cost data coming out, and it supports Roger… USA: Heavily subsidized R&D for bomb program. Some non-subsidized commercial nukes; mostly in a far gone past with vastly lower regulatory hurdles. Nothing recent to base any evaluation upon. Fuel cycle subsidized by government (mandatory disposal by government). Decommissioning costs still speculative so no life cycle costing possible. Japan: Heavily promoted by government for reasons of strategic energy security (similar to France. Not much else to use – no oil, coal, etc.) You don’t see a lot of Japanese reactors popping up outside Japan, so I have to guess that the product doesn’t sell well outside the Japanese economy. We must wait and see. Since they bought Westinghouse, with a nifty new small efficient reactor design, we may get an answer to the hand wave about new reactor designs… China: System build with bombs in mind as a government product 100% paid for by the communist government. Slight movement into market based system. Too early to tell where it ends up. Fairly loose regulations, though, so it may turn into an existence proof of fast economical construction of new plants – if they don’t blow up or melt down… India: Heavily government supported industry. Same reasons as France. Not much choice, given the very low levels of any other energy source in the country. (They have something like 1/3 the world’s Thorium though… thus their being the leader in Th reactors and U233 bombs at the moment). Bomb program and fuel cycle involvement by the government too. No life cycle costing due to young life of industry (decommissioning costs show up 50 to 100 years later…) The rest of the nuclear programs in the world similarly are government run often for bomb making with electric power as a cover. Israel. Iran. The Old Iraq. etc. The only one that I find interesting and NOT in this mould is Canada. Not a bomb making country. Lots of other energy available. Yet they designed, built, and sell the CANDU reactor. Yet a lot of the early design and R&D stage was government subsidized. So that is where I hang my hopes. Unfortunately, recent sales have dropped. CANDU reactors are cheaper and more efficient than many others. They run on natural (i.e. not enriched) Uranium. You can put a Th blanket in them (that’s part of how India did “things”…) and they are sold in a global market. Yet these folks: and several other sites, say that it’s been at the expense of $13 Billion to $16 Billion of subsidy… and that isn’t even counting the overpaying that might be going on from the buyers side in places that want one for the indirect purpose of following an India / Pakistani style path to nukes. The CANDU allows online refueling and breeds more Pu than most, so it’s “better” for a backdoor bomb program than most other “power” reactors. you need to do a lot of “refueling” to get a short fast cook of the fuel rods, but hey, what’s an aspiring global dictator nuclear wanna-be gonna do? It’s just a little money… So there is the unanswerable taint of clandestine bomb program subsidy potential in those “free market sales”. We have the existence proof of India too… At the end of all this, I’m left with no good “counter” to Roger’s position that nuclear is simply not economical sans the ever present hand of government promotion. That it is a bad choice when based only on issues of energy economics. If you have a better argument, please make it as I desperately want one! Until that time, saying you want the messenger silenced because you don’t like the message and it “gets on your nerves” is a bit much. Getting old gets on my nerves, but that doesn’t mean it will stop happening to me and aging is a lie. Roger Sowell Did you read the article, this is over 20 years! So lets add it up shall we, 20 Billion loan Guarantee – FREE as far as I know there were no defaults. 2B for enrichment Research and Development = Science – you do not think the Government should fund science? 2B Risk insurance due to “facing delays due to regulations or public opposition” which would not be required in environmental activists did not have such an anti-nuclear bias that reaches into REGULATORY DOMAIN. 1.3B Decommissioning is because the Government promised to create a national nuclear waste storage facilty and are required to pay this as part of not meeting its legal obligations in finding a suitable location. Again the blocking of the Yucca Mountain site by environmentalists. 1.2B in Reactor Research – Again funding science is bad? The PTC is nothing compared to the current one for renewables and as far as I can find out ran out years ago and was implemented to off-set a taxation burden on the Nuclear Industry. All Governments provide maximum liabiltiy protections that are FREE unless a disaster occurs, all it does is limit the liability of the operator to a set amount. No disasters no payments. We are Talking about 150B over 10 years DIRECT INVESTMENT in renewables on top of the 10 Billion spent in the Bush Years, not even close to them same amount of money and we ended up with 20% market penetration with nuclear and we maybe can get to 5% renewables. Geoff Sherrington (03:58:26) : Have you checked whether this polymer is reusable? Last time I read it was an ion exchange resin that could be flushed clean of accumulated U and used again. Yes, it can be reused. Don’t know the total process (flush and re-dunk; or flush, melt, reform and dunk; or flush chemically reprocess…) so I just left it as unstated. But most likely you would have a few of years lifespan out of it. I’d guess 5 to 8 years based on other plastics. Even at the end of that, though, the majority of the material is still suitable for making into new plastic. The idea that the quantity of resin needed is somehow too big is just out of touch with how large the global economy really is. As I showed up thread, we could fit it in a small percentage of the worlds present oil tanker fleet.’s harvested and swap in some new processing chemicals. That you might need to reform or replace that resin every few years is just not very significant compared to everything else. It’s down in the fractional pennies on the dollar… Roger Sowell, Well, large and small hydro are killing off the fish, so we’ll wait until the regulators side with the fish and start tearing those down. You are certainly aware of the internecine warfare going on regarding water. As far as I know, the Altamont wind farm is not tied to any storage. It is a disaster. Calfornia’s Energy Commission website published its last Wind Performance Summary Report (required by law, by the way) for the 2000-2001 period. I cannot get the data to write my representatives on this lunacy. By the way, the Altamont wind farm did kill a member of the public. And those raptor kills are really causing heartburn. Not sure how rates are calculated, but my 131%-200% of baseline is 24.7 cents/KwH. 201%-300 is 35.4 cents/KwH. I hit these routinely in a relatively new house all tweaked up to Title 24. As an electrical engineer, these are embarrassing figures. And as you mention, the colonial power known as California is trying to dictate the fuel mix for electricity generated outside the state. Wait until the ratepayers experience the full magnitude of that nonsense. That, together with the budget fiasco, should leave no doubt that Sacramento is inhabited by fruits and nuts very much disconnected from reality. Roger, The PTC for new nukes is for 8 years. The life of a new nuclear plant is probably 80 years. Arthur E.M. Smith: It is great delight to read your well-informed and level-headed posts and web site (including stock trading part; gilty of same passion too). I would like to add, that 2005-2008 renaissance in uranium exploration (it is mostly busted right now) yielded tremendous discoveries of high-grade uranium reserves virtually everywhere where exploration was allowed (up to 20% of yellowcake) . We are, effectively, centuries or even millenniums from even thinking about ocean uranium extraction. But it is nice to know that we have huge explorable uranium reserve in oceans. Second, as I am aware of, every singly country which acquired nuclear weapons did it on dedicated “research” reactors, not commercial electricity generating units. Third, previously built nuclear power stations were life line for survival for numerous E.European and post-soviet countries, when their economies collapsed after disintegration of Soviet Union, when these countries did not have single dollar in their coffers to buy NG, coal, and electricity from abroad. Some examples: Ukraine – 47% electricity nuclear, Czech – 30%, Bulgaria – 32%, Slovakia – 54%, Hungary – 37%, Lithuania – 65%, Slovenia – 42%, etc. And one interesting news: “Canada is poised to sign a deal with India to sell nuclear technology and materials…The pact will open up the lucrative Indian market to Canadian nuclear exports for the first time in more than three decades…Under the deal Canadian nuclear exports cannot be used for military purposes…The estimation is over the next 20 years, something like anywhere from CA$50 ($44) to CA$150 billion ($133 billion) worth of civil nuclear energy needs are what we’re looking at…AECL signed a deal earlier this year with a leading Indian engineering firm to start costing out the ACR 1000s — the prelude to a possible sale…Cameco Corp., is also poised to sell uranium to India.” adoucette, Very interesting site, thank you for sharing that. PTC (production tax credit for nuclear plants) is described here: You may be right about 80 years life on a new nuclear plant. We typically designed oil refineries for a 30 year life, and many of them are still running after 60 years. They had/have extensive inspection and replacement of various portions when those portions no longer met specifications. In the case of nuclear plants, ultimate lifetime is determined by the NRC, who either grants or denies extensions of an operating license. arcs_n_sparks, I “feel your pain” at the electric meter. I suspect that the electric rate structure in California is designed to encourage solar PV systems. I have an associate in the Los Angeles region who also pays in the 25 cents/kwh range at his home for incremental power, and is installing PV to eliminate that charge. He hopes to bring his power usage down to the base level. The economics look much better when the power saved is at 35 or even 25 cents, compared to 14 cents. We have a similar rate structure as you described, here in Southern California Edison territory. I don’t know how close the nearest pumped storage hydro plant is to Altamont, but down here we have a couple of them along I-5 near Santa Clarita. Our nearest wind-farm is at Tehachapi, about 40 miles away. In general, I agree with your assessment of the geniuses in Sacramento. But, they are moving in the right direction on one point, SB 696. We had a state judge rule that our AQMD issued invalid pollution permits, and that ruling affects not only businesses but power plants. SB 696 will overturn that ruling, and allow the utilities and IOU’s to build again. It is not yet law, but the current version of the bill may be found here: Climate Heretic (12:31:34) : My broader point is directed at those who steadfastly maintain that nuclear power plants do not receive subsidies in the U.S. To one of your specific points, “2B for enrichment Research and Development = Science – you do not think the Government should fund science?” Government is so good and efficient at what it does, one can only wonder if any research should be funded with tax dollars. Perhaps private funding through universities would produce better results. An example of tax-supported science research is the Hanford site in Washington state. Hanford has cost the taxpayer $30 billion over the past 20 years, and consumes more tax dollars at the rate of $2 billion per year for cleanup of radioactive wastes. That government-sponsored science research produced plutonium for the first atomic bombs. The article linked below states: “The federal government now spends about $2 billion each year at Hanford, or roughly one-third of the total nuclear cleanup budget, to rid the site of radioactive and toxic waste.” That would indicate that $6 billion per year of taxpayer dollars is spent on cleaning up nuclear wastes. E.M.Smith (11:59:10) : Well-said, sir, well-said. See this link from the Wall Street Journal for a few reasons why a government cannot run a business efficiently. I believe this is not behind a pay-wall. A nuclear powered ship doing, oh, 3 knots? would take darned near no power at all to run. I’d like one that could do 20 knots or better top speed. Handy for staying out of the way of storms or navigating in one. BTW I call the enviro plans The Great Leap Backwards Roger wrote: My broader point is directed at those who steadfastly maintain that nuclear power plants do not receive subsidies in the U.S. Roger, The US has a long vested interest in nuclear research for Military purposes, both from a weapons and as a source of power for propulsion of its subs and carriers. Of course a lot of that research has been used to help develop the Nuclear Power industry, but its not the same as a subsidy. Hanford was all about ending WW2, so its cleanup shouldn’t be tied to the Nuclear Power industry either. Arthur adoucette, How about this, then? Is this close enough to government subsidy, and is the $18.5 billion sufficient? Government money, propping up the nuclear power industry that would otherwise wither away and fail…yup, sounds like a subsidy. “Last October, the department [DOE] received 19 applications from 17 electric power companies seeking a total of $122 billion in loan guarantees to build new reactors, far more than the $18.5 billion Congress has provided in loan guarantee authority for nuclear power plants. . ..” [bold emphasis added – RES] [ Victoria is the site of the infamous South Texas Nuclear Plant] This is just one such article, as they appear almost daily. It is quite amusing that new nuclear power plants so obviously require government funding, yet there are those who insist that that industry is self-sufficient. If this does not provide adequate proof, then I don’t know what will.. With respect to performance under accident conditions, the CANDU is limited by the water inventory, which is many times larger than LWRs proportional to power rating, particularly when the shield tank inventory is included. It is also worth noting that the calandria itself is at STP, typically about 70 C. In short, there’s an enormous amount of cold water that a reactor has to boil off under accident conditions for any escape of radiation out of the fuel. What is agreed generally by the safety experts in the industry is that CANDU has about an order of magnitude less probability of radiation release to the environment than any other reactor type. It’s overkill in a sense, because the prospect of significant radiation release from modern reactors of any kind is trivial, as the Kashiwazaki-Kariwa earthquake two years ago demonstrated. With respect to plutonium, yes CANDU produces lots of it. However, this is somewhat misleading. Because of its neutron efficiency, most of it is consumed by the reactor during normal operation. What remains is mostly higher 240, 241 and 242. Pu239 composes much less than half of the residual plutonium in the spent fuel, making it useless for weapons purposes. Also remember, reactor fuel is uranium and plutonium oxide, not pure uranium and plutonium. Roger, surely even you can tell the difference between a loan guarantee and a subsidy. What you’re dodging around is the fact that all energy industries receive some form of government support, either direct subsidy, tax incentives or things like financing instruments. What matters is what is the return on the investment. In the case of nuclear, like tax incentives for oil and gas drilling, the energy production return is huge. In the case of renewables, the return is dismal. Roger, A loan guarantee does not cost a thing unless the loan is defaulted on. Given that the reason that nuclear power plants cost so much in the ’70s was a combination of high interest rates and environmentalist activism which greatly increased the amount of time that it took to get a nuclear plant up and running forcing the power company to take out more debt to finance the interest expense on their construction project long before they are able to produce any product to sell. Shoreham is the most infamous example of perfectly good nuclear power generators being run out of business because of construction delays caused directly by activists. By offering loan guarantees it lowers the interest rate that the nuclear companies have to pay, which lowers the expense of construction a nuclear powered generator. The construction cost is the biggest expense in building a nuclear power plant, so by lowering the expense of construction, the savings are passed on to ratepayers. Difference between nuclear and wind or solar is that nuclear power could exist alone without the aid of wind or solar, but wind and solar could not exist without coal, nuclear or some huge hydro-electric dam. People do not like coal powered generators for various reasons but coal is not going to be replaced by any wind farm, only nuclear energy has the potential to replace coal. Assuming that CO2 does cause Global Warming, which the politicians and fruitcakes are constantly telling us, the only option for a future energy needs is nuclear power. There are no “Alternatives” to a base load generator unless we want to go back to the 19th century. Personally, I do not care. We have a coal plant here and I have never noticed the thing. The lights stay on and the system seems to work. If it ain’t broke, why fix it? Johnnyb, Moreover, the government often charges for offering loan guarantees. This is typical in the case of export credit agencies which provide either loans or loan guarantees for exporters developing power plants offshore in other countries. In fact, in the case of reactor sales, the revenues to government from the loan interest and loan guarantees have often exceeded the government net revenues from the sale of the reactor itself. Fact is, most government owned export credit agencies (EximBank being something of an exception) are in business to generate a profit. Fact is, nuclear power is necessary even without phony global warming excuses. 200 plus reactors in China and India in planning stages testifies to this. Why? Coal transport costs are prohibitive at the volumes required, and they don’t have the California option – namely foisting your power production requirements on someone else. For all of Roger’s hand-waving, it’s always interesting to note that the world doesn’t agree with him at all. Nations such as China and India who need lots of electricity aren’t wasting any significant effort on any of his remedies; they’re building coal and nuclear. What do you suppose they know that Roger doesn’t? After all, it’s their money, effort and time. Oh yeah, a little thing called facts and reality, not green fantasies. Colin (08:53:36) “What matters is what is the return on the investment.” Please could you comment on this note to Ro!!” also re:CANDU – “However, its greater capital cost is offset by its lower fuel costs; no enrichment of fuel and no conversions steps to and from UF6.” – please do you have any details on annual fuel costs for a CANDU assumed to be runing at 100% capacity? Thanks in advance. Roger Sowell (08:07:26) : Loan guarantees are one thing. The direct subsidies for wind and solar are much worse. Much, much, worse. It is all part of The Great Leap Backwards Roger, Victoria is NOT the site of the South Texas Nuclear Plant. That plant is in Wadsworth which is about eighty miles from Victoria, Texas. I’m not sure why you’re calling it “infamous” since it has operated for many years flawlessly. I do know that there were terrible construction delays because of a critical shortage of engineers before and during it’s construction. The construction should never have begun with less than half of the construction documents completed and no interference studies done. I AM hoping that the nuclear plants will be built here in Victoria but I’m not holding my breath. I believe this administration will not approve anything that makes this much sense. Mike Bryant, (13:34:29) : You are so right that the STNP is not actually in Victoria, but I was following the wording from the article I cited. It may be that not everybody is up on the small towns in Texas, like you (and me). I lived there for my first 35 years. My “infamous” reference is to the design/construction fiasco, not the subsequent operation. It does operate pretty well, actually. So far. ;-) The fiasco included design blunders and construction screwups that resulted in Brown and Root (predecessor to KBR) being kicked off the job, then Bechtel and Ebasco brought in to correct things and finish the plant. The delays caused quite a bit of heartburn for the city of Austin, which had their money sunk into the project, but no power when they needed it due to the seemingly interminable delays. They were scrambling to keep the lights on. It will be a cold day in a very hot place before Austin goes down that road again. There were lawsuits filed all around, and rightfully so. What a mess. As to new nuclear plants, very recently (maybe yesterday?) a Republican is now calling for 100 new reactors in the next 20 years in the U.S., with government guarantees to make it happen. Probably just grandstanding for his constituents and to make a headline, given the relatively few Republicans in both houses of Congress at the moment, and the Administration’s views on the matter. @slowtofollow (11:57:19) : You might be interested in the Finnish experience (read, fiasco) with their French-designed plant, also the Flamanville (FR) site is having troubles, see: In Finland, both the contractor and owner are reportedly filing arbitration claims against each other. That does not appear to be a happy job-site. Yes, I agree that nuclear is needed in the future with or without phony global warming. I do wonder if Roger understands that it is an either/or proposition. Either we build coal plants or we build nuclear plants OR we learn to live without electricity, there are no “alternatives.” Johnnyb (14:35:47): You say there are no alternatives. Are we to dismiss, then, all those natural-gas fired power plants that presently produce around 20 percent of all power in the U.S. as phantoms, as mere hand-waving? Also, all those hydroelectric plants with huge lakes behind them are also phantoms, more hand-waving? (Bonneville, Hoover Dam, Glen Canyon Dam, TVA dams, Three Gorges Dam in China, and many others around the world) How about all those geothermal plants that reliably produce 5 percent of the power in California; are these just a gleam in an engineer’s eye? I am presently holding in my hands a report from Geothermal Bulletin, published by Geothermal Resources Council, May/June 2006, pg 90, that shows 24 countries having geothermal power plants in 2005, with an aggregate capacity of 8,933 MWe, from which they generated 56,786 GWh/y. That is certainly a very small percentage of all the power produced in the world, (less than 1 percent), yet it is there. Are you certain that our only choices are coal or nuclear, and nothing more? >. Spoken like a true person who doesn’t know what the hell he’s talking about. The reason that Uranium prices are so easily manipulated is because there is so little of it – but even with the manipulation, the cents/kilowatt hour that was added to the cost of electricity was minimal. Uranium is approximately 1/10th the price of coal – it would have to be 400 dollars/kg to be the same as 20 dollars/ton of coal. At this price, there are millions of tons of uranium available for mining. Combined with thorium or breeder reactors, we could go for thousands of years on nuclear alone, not even counting the possibility of alternative fusion sources (like the IEC engine, or helion, or focus fusion, or general fusion) Although I have worked in the nuclear industry on and off all my life I am not convinced that it makes economic sense, at least in the UK at the moment. Coal is cheap and abundant and if you don’t care about CO2, and I don’t, you have a well proven system albeit coal fired plant is expensive. Natural gas plant is much cheaper to build than coal and the fuel even cheaper but the cost of shipping the fuel as LNG expensive in capital investment. In the USA which has huge gas reserves and can build pipelines, also expensive, I would think this was the cheapest way to go for the next thirty years anyway. And of course if you have the gas distribution infrastructure, which the Uk has, you can move to small scale CHP systems and adapt them to supply the grid as well. Again the UK has a very sophisticated grid. So for the UK this is an ideal solution which is cheap to build, makes the best of the fuel and ofers the flexibility with modern control systems to meet variation in demand. And supply can be met with LNG. Nuclear fission maybe in the long term but IMHO you do not rush to use a well proven but aged technology when a cheaper, newer and better one is at hand and is well proven too. You use that and invest modest sums to see whether the new nuclear tehnology can be turned into a commercially practical solution: and if it can then decisions can be made. Kindest Regards Good post, a jones. I think nuclear would be very inexpensive if there were not so much opposition. When you’re talking about tying up several $billion, while every possible step is litigated by people with nothing to lose, it’s no wonder that nuke plants are expensive. But if 20 – 30 nuke plants per year were built on a standard design, costs would plummet due to economies of scale. Nuclear energy isn’t magic. It’s just another way to extract energy. Pretty much unlimited energy, at that. But the eco-terrorists will fight nuclear energy every step of the way, making energy much more expensive for the consumer. It’s risky making predictions. I almost never do it because of Niels Bohr’s and Yogi Berra’s dictum: “Prediction is hard, especially about the future.” But I’ll risk it here: electricity brownouts and blackouts will become more common. And it will be 100% the fault of the frightened eco-Luddites and their enablers, who will blame everyone but themselves. “”” Colin (08:47:16) :. “”” Colin, Can you steer us to a definitive treatise on CANDU ? When I was in School, CANDU was a gleam in somebody’s eye; but I do remember that it was supposed to have a number of good features; and from what you have said here; it sounds like it does. I think the natural Uranium usage is a rather valuable feature, eliminating all manner of expensive, and somewhat potentially hazardous paraphernalia like centrifuges. I nearly went the nuke direction career wise; and it would be nice to get back up to date on where the technology has gone. George a jones and smokey above – yes, good points. I’d like to flag up the paper the UK National Grid have done on the possibilites for gas supply from waste: Heat is a big energy req. (approx 50% of UK energy OTTOMH) and there is a lot to be done on this esp. re: reducing waste. Re: nuclear – report from Poyry highlighted approx 13GWe potential from industrial CHP: About 8 nuclear euro spec installations worth of elec at a fraction of the cost seems like a good deal to me. Ironically at Pembroke there is a scheme on the stocks for 2GWe of CCGT! And what about coal? Ratcliffe on Soar still going strong according to EON with a refurb and flue treatment retrofit….: At the moment I don’t see how the case for nuclear stacks up. Maybe if it could be built for 25% it would make some sense (?) but IMO at the moment it needs hidden subsidy to make it “viable”. A lot of the talk about nuclear being subsidized, or uncompetitive, is based on comparing it to fossil fuel plants, as they are currently regulated (which is to say, hardly at all). Look, everyone understands that fossil fuels (particularly coal) are and will be somewhat cheaper than nuclear, given that their (massive) external costs are not counted, and they continue to be allowed to pollute the air for free. Most studies on the external (i.e., public health and environmental) costs of various energy sources (such as the ExternE project at) agree that fossil fuels’ external costs are enormous, and that they would be more expensive than nuclear or renewables if they were counted. Coal’s costs would more than double. Such studies show that nuclear’s, as well as renewables, external costs are negligible (a fraction of a cent/kW-hr). All waste management/disposal and plant decommissioning costs are already fully included in nuclear’s power cost. The govt. pays nothing (no subsidy). Any suggestion that the cleanup of old nuclear weapons sites (which were contaminated before the first commercial power plant started) constitutes any kind of nuclear power subsidy is outrageous. We’ve already decommissioned several plants in the US, and we have a very good idea of what it costs. The ideal energy policy would be to simply cap or tax CO2 emissions, and let the market decide how to respond, i.e., determine the most effective, least expensive way to reduce emissions. I personally think we should do the same thing for oil and gas imports (from the Middle East and Russia) because the geopolitical costs of these sources are significant. A healthy tax on all of coal’s other pollutants (which kill 25,000 Americans every single year) are also in order, but CO2 limits may take care of this problem anyway. Any subsidies, such as tax credits or loan guarantees would be given out equally to all non-emitting sources (including nuclear and most renewables). Such an approach would automatically answer all of the economics questions most folks here are talking about. Let the market decide. Let’s see what happens. I would think that Mr. Sowell would agree to such a policy. It should be noted, however, that renewables are far more subsidised than nuclear, and have been for some time. For years now, conservation and renewables have been getting a larger amount of federal R&D money than nuclear: Also, in terms of overall subsidies, including direct operating subsidies, renewables like solar and wind have been getting over 20 times the level of support that nuclear does. See Table 35 on page 106 in Chapter 5 of the following link: And the tilt in the playing field among non-emitting sources (towards renewables and away from nuclear) is becoming far more extreme, with the new US administration and congress. an outright mandate to generate 15% of electricity with renewables by 2020, along with a 5% mandate for conservation. In other words, an infinite subsidy. By contrast, there is no nuclear mandate (of course), and the (much larger) nuclear industry only gets $18.5 billion in loan guarantees from a previous 2005 bill passed in the Bush years (it gets absolutely nothing from the recent bills). Renewables are generally significantly more expensive than nuclear, and in any event are limited by intermittentcy. None of the biased policies listed above would be necessary if they weren’t. (Even with the cost overruns, the Finnish plant remains cheaper than the other non-emitting alternatives; and by the way, the Fins are ordering another new reactor, despite how everything’s going.) As for fossil fuels, the fact that they are less expensive than nuclear is irrelevant, given that their free emissions into the environment will no longer be allowed. The real cost comparison would be between nuclear and coal or gas with full sequestration. I am confident nuclear could compete with that. I’d like the posters who question nuclear’s viability (vs. fossil fuels) to provide a comparison based on full CO2 sequestration for the fossil side. Of course, with the ideal, even-handed policy I describe above, none of us should have to prove anything. We all have the right to our own opinion, as long as we agree to abide by the result of the market test. I personally don’t share Mr. Jones’ optimistic view on future natural gas supplies, but hey, he has the right to his own opinion. We should still be able to agree on a policy that allows even-handed competition on a level playing field. Then let’s just see where the market goes. @ Roger Sowell (15:40:41) : You mentioned natural gas, hydro and geothermal as alternatives, but do you really believe that any of them can scale to replace coal? In the US, it has been longer since the last major hydro installation than since the last major nuclear power plant completion. We have essentially built about as many dams and associated lakes as possible from an economic and environmental standpoint. Even the ones that we have are only as reliable as the long term weather patterns allow them to be – there are some reservoirs in the US today that are at historically low levels due to drought. I also bring my perspective as a native Floridian to this discussion: hydro requires not only water, which we had in massive quantities, but also elevation variations, which were non existent for the entire state! Natural gas appears to be relatively cheap and plentiful today, but were you sleeping during the last 10 years as gas prices reached the equivalent of 8-15 cents per kilowatt hour (just for fuel) for sustained periods of time? By electrical generating CAPACITY, natural gas is more than 2 times as large as nuclear power in the US, but the plant owners choose to limit the amount of electricity they produce each day. By PRODUCTION, nuclear and gas are neck and neck at 20% of the total. Even at today’s prices, the fuel is still quite a bit more costly than coal or commercial uranium fuel on a per BTU basis. Because of the tight balance between supply and demand, gas prices tend to shoot up if there is an increased demand – supply lags way behind the price increase. If you want to replace coal combustion, there are several options. Natural uranium fission, low enriched uranium fission, recycled uranium fission, plutonium fission, and thorium fission (indirectly – it actually gets converted to U-233 before fissioning). All of those alternatives can be enabled with little government money, but all may require at least enough government support/protection to allow progress to continue without too much interference from people that LIKE high natural gas prices because selling natural gas is how they make their money. According to the utility industry leaders that I have talked to, the MAIN reason they want government loan guarantees is to put some public “skin” in the game. They do not want a repeat of some of the industry’s early experiences with nuclear fission where they made good faith investments in major projects for both power production and fuel recycling facilities only to have the government referees join the opposition team for short term political gains. Rod Adams Publisher, Atomic Insights (Oh yeah, I forgot to address geothermal. I guess that is because it is such a small scale contributor that it is easy to keep it as an afterthought.) Jim above – are you talking US or UK when you say all nuclear decomissiong costs are already fully covered in current nuclear operating costs? That is not my understanding of the UK situation where I believe the £70bn plus is to be met from the public purse. I think it was only on this condition that a buyer was found for the UK gov. stake in BE. Re: subsidy of renewables – one element in the UK has been to provide a stimulus to establish the industry through paying a premium to non fossil fuel sources. My understanding is that this has also been paid to nuclear. Also – agree the need for proper total cost accounting and higher energy prices – if the cost per kWh was increased by 4 this is approx the same as decreasing build by the same factor. However you are putting high emphasis on the anthropogenic CO2 argument – it could be its significance in climate and its role have been overestimated. Certainly there are significant “anomolies” in some of the evidence. Please read the work I referenced above from Poyry (“Securing Power”) for the potential of industrial CHP as electricity generation source. This is proven tech in the UK on industrial scale (1.2GWe at one location I believe) and should be IMO getting serious attention cf. nuclear and new CCGT. I share your concerns re: gas supply security for the UK – again the National Grid proposal referenced above looks worthy and workable. And both of these would also score well on the CO2 reduction front…. @slowtofollow – A major portion of the reported decommissioning costs for UK nuclear plants is related to their dual use nature. Extracting the weapons usable material from production reactor plant fuel has historically been a bit of a messy process that results in contaminating a large amount of machinery. In the States, our commercial plant decommissioning processes have been quite a bit less involved than some people expect because the amount of contamination and the volume of contaminated equipment is much smaller than in the case of the MAGNOX plants. Contrary to popular belief, we have experience with the complete decommissioning process for at least two and perhaps more former commercial plants that are now essentially greenfields. (Shippingport and Yankee Rowe are the two I am thinking about.) Rod Adams Publisher, Atomic Insights Rod above – thanks – do you have any links/references re: the dual use and decommissiong issues? Are you saying this decommissioning method is a choice and that things could be done for (significantly?) less? @slowtofollow – Here is a link to a good summary of the UK’s atomic energy program and the impact of the particular technology employed with regard to decommissioning costs. Much of the difficulty is not due to the method chosen for decommissioning, but to the use of magnesium oxide coated fuels in CO2 cooled, graphite moderate gas reactors. Another large portion of the quoted costs are related to the clean up of nuclear weapons production facilities that are not related to power generation. Neither of these major cost drivers are relevant to discussions about future reactors and their eventual decommissioning costs. Rod Adams Publisher, Atomic Insights @Rod Adams, “. . . natural gas, hydro and geothermal as alternatives, but do you really believe that any of them can scale to replace coal?” Yes, of course these can replace coal. One must examine the resource availability.. Another commenter made the point earlier that perhaps China builds nuclear power plants because their coal requires transportation. Perhaps the cost to build railroads and transport the coal is prohibitive. That could be the case, and it is certainly a consideration in Southern California, for example. Even if California state law did not forbid new coal-fired plants, the rail infrastructure here has no additional capacity for the traffic. One is then faced with the prospect of mine-mouth coal-fired power plants, and transmission lines to bring the power to the user. This is much the same problem as exists for solar, wave, and wind power. You mentioned natural gas price earlier, and I assure you I am quite aware of energy prices, and the various factors that affect them. The fact is that LNG prices, and by extension, natural gas prices, will be very low for the foreseeable future due to intelligent investments made years ago that are just now coming on-line. Supply presently exceeds demand, and much more supply is coming to the market in the next few months. Transportation costs for LNG are now very low thanks to ExxonMobil’s new generation of large ships. see this reference for natural gas prices:. see: Geothermal is also competitive, and getting more so as the drilling companies continue to reduce their costs per 1000 feet drilled. But my firm belief is that natural gas and the oceans will provide the energy beyond any ability for nuclear to compete, unless nuclear continues to be subsidized. Natural gas is already far cheaper than nuclear, and will remain so for many decades. The world is awash in cheap natural gas.. Natural gas, hydroelectric, geothermal, and off-shore renewables also share a characteristic that nuclear power will never have: they are not ultra-hazardous, but nuclear power is by legal definition. One shudders to think of the ramifications of a terrorist crashing a large aircraft into any of the 400-plus nuclear power plants around the world. In contrast, a wayward tanker taking out an off-shore wind-turbine would hardly make the news. 1 – Well, nuclear plants have been sized against full-sized plane impacts since the beginning – and their inherent design (reinforced concrete round and spherical shells backed by pressure-vessel-sized steel-lined circular shapes) resists impact damage. (The Trade Towers, for example, were sized only against small plane impacts. Were rectangular shapes of glass-walled open beams and tie-rod concrete slabs 1000 feet high.) 2 – Geothermal is ONLY valid in a few localized highly volcanic areas and earthquake-active areas, every area of which is already emitting sulfur and CO2 and noxious gasses in near-lethal, very irritating quantities. Plus highly corrosive (and lethal!) contaminated ground water The steam is low-quality, only modest pressure, and must be carefully used at rates to keep from “cooling” the heat exchange area too fast. In other words, take too much power from the ground – faster than it gets exchanged underground, faster than the ground water seeps through the seams and cracks to the steam source, or faster than the steam source exchanges heat to the conducting water-steam heat exchangers, and your production goes down. Did I mention your steam generators and heat exchangers rapidly corrode out and get blocked up by erosion and contamination? Geothermal has limited use. In limited areas. Sometimes. If nobody tries to live near the steam vents and sulfur emissions. And other dissolved leachates like arsenic, copper, and iron, uranium, radium/radon, etc. ALL ocean-sources systems are very low density, and can ONLY be used near the coasts. NONE are now economically feasible. What precise “new” designs do you know of that will suddenly make them competitive? I cannot NOW even stop draining freshwater lakes in the US southeast when they are emptying due to Enviromentalist REQUIREMENTS that mussels in saltwater (brackish) mudflats 600 miles downstream “might” be harmed if freshwater flows are reduced. (Neer mind that previous droughts (before the 1940-1950-1960-era dams were built!) might have stopped freshwater flow completely. Since these lakes are the ONLY source of drinking water to inland cities with rela people, but cannot stop the draining, how do you propose getting ANY permission to use or build ANY tidal or ocean power systems? That is, now, enviro’s don’t care about thirsty cities (or 50 million innocents killed by denying them DDT in Africa and Asia) compared to the “possible” harm done to mosquitoes and mussels. What makes you think they will change their tactics for “power production”? 3 – In contrast, a wayward tanker taking out an off-shore wind-turbine would hardly make the news. BS. Tankers (commercial ships of any size) can’t get into the shallow water where wind farms need to be located – but it doesn’t matter. Kennedy and the east coast liberals DON’T ALLOW them to be built offshore. The extemism and hype about oil tankers means any story about a collision means instant comment. Unless teh tanker is rescuing foolish enviro’s trying to sail into the north pole – and getting into trouble. Or sailing into antarctic seas and sinking THEIR ocean-going ship with thousands of tons of oil. 4 – “Regarding the oceans, off-shore wind, wave, and ocean current power plants have zero fuel costs, zero decommissioning costs other than dismantling for recycle, and very good availability factors.” BS Tides START by only a 50% availability factors – half the time of every lunar “day”, the tidal forces are either zero or near-zero since the CHANGE in tidal motion (usually only 6 to 12″ inches in 90 percent of the world!) stops every half cycle. Worst, tidal forces come at cyclic times regardless of what electric demand is – so you can’t use them to meet peak demand. You must keep cycling on and off (extremely hard and expensive on power plants’ long-term lives!) regular power plants to catch up to the weak tidal forces. And NO tidal mechanism actually owrks. In the two or three spots in the whole world where tidal forces actually DO move enough water to make it seem economical, enviro’s won’t let you build a Zieder Zee-area of dams to trap the water. (The dams would damdage the mud-flats and trap fish.) 5 – . ” BS. “Wind power” does NOT exist in enough quantity ANYWHERE (nor can it be built in enough quantity t “pump” back uphill from below a dam to behind a dam to refill the lake behind a power plant./ Have you heard of this thing called “friction”? The second law of thermodynamics? Gravity? Pump efficiency curves? Pump energy requirements? Water CANNOT flow “uphill” from below miles of the river below a dam to the pump suctions at a dam to go back uphill and over the dam to refill a lake. Robert A Cook, PE (09:52:15) : Excellent rant. I LOL’ed a couple of times. By the way, I respect your PE credential, and I do not have one. But I do have some knowledge of the things you wrote about, friction (I presume you refer to Reynold’s numbers), Second Law of Thermo, gravity, pump efficiency curves, and pump energy requirements. All those things were elementary subjects where I took my degree in Chemical Engineering, and I used them regularly in my engineering career. I still do. So yes, I believe I know of what I speak. By the way, water certainly does flow uphill (after being suitably pumped) into a lake. You might be interested in a pumped storage hydroelectric plant (PSHP) near Los Angeles, which uses Pyramid Lake and Castaic Lake for the two reservoirs. Power generation is around 1,500 MW. There are others, this is not an isolated case. I have read about a Chinese PSHP one where river water is pumped to a lake on a plateau above, reportedly more than 1000 feet above the river. You might also be surprised at the wind power in the U.S. Great Plains area…plenty of power to pump water anywhere. The U.S. DOE completed a wind energy survey of the U.S., at 50 meters above ground. It is pretty easy to find by doing a search on DOE and Wind Energy Map. Along the same lines, the U.S. MMS (Minerals and Mining Service) included wind, wave, and ocean current in their latest 5-Year Plan. They are preparing leases in the appropriate areas off-shore, and have companies lined up to bid. As to renewable energy designs, you might be interested in what the EERE website has to offer. NREL has information also at Regarding geothermal, you might be surprised on this, too. I recently attended a meeting of chemical engineers (Los Angeles chapter of AIChE) where we were treated to a presentation by one of the world’s most experienced experts in geothermal power plants, Mr. Art Krugler, PE. Art has had PE registrations in chemical and mechanical engineering for more than 50 years. Geothermal power has come a long way from the scenario you described above. see: I encourage you to look into these matters. None of what I wrote is BS, rather it is TS, where T stands for True. To continue looking at your exaggerated (and falsifiable) claims for alt energy. 1. “Hydroelectric in developed countries is exploited to a certain extent, but there is more power available in river-run systems. There are also plenty of sites in less-developed countries that should be built.” BS. NO (economically feasible) hydropower locations in ANY developed country remains “undeveloped.” NO economically viable “pumped storage” location is available for (legal) development in ANY developed country. NO hydropower location ANYWHERE remains “unopposed” either in any undeveloped country either, but that is YOUR problem as an enviro extremist to change. “I” am not ALLOWED to build additional hydro-power sites anywhere without [b] extreme [/b] international environmental opposition. (See Three Gorges Dam, for example – even in a Communist-dominated socialist state, the UN opposes protecting lives against flood damage.) 2. .” Absolutely false. NO resource is available uniformly distributed around the world – LEAST of all the physically-limited fixed-location “resources” of solar power, water-power, tidal power, geothermal power, wave power! EVERY (uneconomical and extremely expensive!) “resource” ONLY exists in limited areas, within limited timeframes at those limited locations, and requires immense “areas” of environmental DESTRUCTION to begin to harvest even limited amounts of power inefficiently! For example, NOTHING can grow under, in front of, or behind a solar panel. To cover 1 km x 1 km of ground for a solar area kills every living thing under the panels. (Well, cockroaches could survive in the newly shaded area of asphalt and rock. Until they starve to death.) You can’t live under nor near (several km distance) wind turbines – too much low frequency noise. Put them offshore, and Ted Kennedy sues to remove them – and you need km after km of expensive undersea power cables to connect them to shore-side transformers and power lines – also forbidden by enviro’s by the way. (And a complete WASTE of men, material, and resources needed elsewhere.) Worse,as discussed above, wind farms are no more than 18-22 percent effective: You need to build FIVE turbines to average rated power from ONE, and that ONE equivilent is itself only 38% efficient! But you need to maintain all five to get the (supposed) power from ONE – AND still have a 100% conventional power spinning in reserve for that ONE. SO you’ve built 6 plants (5 wind and one regular) to get the power from ONE. Since the regular plant is (usually) ten to twenty times larger than the one wind mill, the actual number is much less. So why build and waste money maintaining ANY wind mill? Solar radiation is EXTREMELY limited by latitude, weather average conditions, snow and storm loading, clouds, humidity, and ground slope. NO solar installation can pay for itself outside of southern AZ, CA, and NM. Even there, they need 50% of their price subsidized. You “could” claim that the power created from your non-existent, non-profitable, hard-to-locate alt energy sites could be “shipped” – but ONE150 mile transmission line across WV recently 14 YEARS to get permits and building OK’s. Just HOW am I going to cover 1/2 of south AZ to power areas for 6 hours a day – when you won’t let me build power lines (VERY EXPENSIVE POWER LINES!) anywhere? Worse, I lose 1/2 of the power created trying to transmit electric power firther than 800-1200 miles. THAT is wasted heat load itself. And a waste of resources. Your mythical “smart transmission grid” CANNOT change the I^2R losses – only tell where the I^2R losses have occurred. Your mythical “smart grid” is no more “smart” than claiming I can pump water uphill from a drying riverbed to refill an emptying lake …. Coal trains can economically move energy from the western and northern (shale oil) states into CA – despite your claims otehrwise. Crush, powder and fluidize-bed react the coal into gas, burn the gas in a CT and recover the hot exhaust in HRSG’s then turn a steam turbine with the resulting steam. 360 MegWatt at 67 to 72 percent thermal efficiency on 5-7 acres of land. Natural gas? Yes – even cheaper. No mythical creatures and no delays and no miracles. Simple, economical power. Delivered in two years. “Transportation” (of the irregular power produced) and “limited availability” (of the primary alt energy source) of your mythical alt energy sources ARE only half of their problems. The rest of the problems are in the very low efficiency of their basic conversion: Too much area needed, too much connection costs, too much losses, too many tens of thousand of acres needed to gather the energy in the first place. Too much interference with other, more valuable things. And, worst of all, NO NEED for the alt energy – sicne we are now in a decade-long GLOBAL COOLING cycle that shows no sign of abating. Since 1935-40 timeframe, we have MROE time cooling than warming – so what is your hysteria about a temporary 2/10 of ONE degree increase in temperature? @Roger Supply presently exceeds demand, and much more supply is coming to the market in the next few months. Transportation costs for LNG are now very low thanks to ExxonMobil’s new generation of large ships. Perhaps there is an excess in gas supply right now. But where do the ships unload in North America? What is the current capacity of all of the constructed and operating reception ports?. Just out of curiosity, Roger, have you ever spent any time off shore? Are you remotely familiar with weather patterns like the Bermuda High? I have had the wonderful experience of spending quite a few weeks on sailboats off of the Atlantic coast of the US. I can recall a number of days when we traveled less than 10 nautical miles during a full 24 hour period and where the oceans were calm enough for water skiing. When the Bermuda High gets entrenched that pattern can last for a week or more over a broad swath of the ocean. Only landlubbers believe that the wind always blows off shore. The trade wind patterns that governed travel during the age of sail have not been altered by technology. Rod Adams Publisher, Atomic Insights Robert A Cook, PE: Thanks above for all the food for thought. In the UK (and EEC) the big issue is that of replacing aging generation stock. And possibly on top of that limiting CO2 output. There is a lack of clarity over the exact nature of the future demand curve and how this can be moderated by means other than more production using the BAU technologies. This is complicated by the fact that the people who have the much of the info. (generators and supply companies) have strong vested interests. For example some plant will have to come off line due to non compliance with the EEC Large Combustion Plant Directive. This is in fact an opt in or out decision by generators according to whether they fit the necessary flue gas treatment. Once a plant is opted out there is then an artificial forecast capacity drop which is used to promote the idea of an energy gap and hence drive a requirement for new plant. IMO the role for a strong national policy including realistic and viable demand reduction through workable technologies is very clear but unfortunately we do not appear to have the mechanisms to deliver it. The comments above haven’t clarified what the drivers are in the US? – its enjoyable stuff but it seems to go back and forth between “mine is better than yours” and “no its not!” :) ! I get the impression each state controls its own destiny? What is the role of federal gov.? How can the US benefit best from its widespread, varied and abundant resource base? Rod Adams, fully understand and embrace the concept of intermittency as it relates to reliable renewable power. I also write on numerous technologies for grid-scale, and small-scale, energy storage. slowtofollow (13:23:04) : The US still has the enviable legacy of for-profit utilities that made the decisions on what was the cheapest way to generate power. The state regulatory commissions, that set rates based on cost of service, forced utilities to justify their decisions and severely punished them if things went wrong (but never rewarded them if the decisions turned out better than expected). Of course, state governments could not resist directing the huge investments in large power plants, so the process has been gradually corrupted. This freedom within individual states is under severe pressure from power hungry Congressman who would love to have another source of graft. There is tremendous lobbying to force the rest of us to follow CA with renewable mandates, carbon taxes, and more regulatory burdens on disfavored (the ones that don’t donate to your campaign) sources.. Fortunately until now, the federal role has been limited in power generation to matters of national security and interstate impacts like air pollution. Most of the federal energy research money has been a total waste, make-work for universities and institutes that do little more than indoctrinate. The only thing worse than federal research would be international research. The fact is that energy conversion is very straightforward for competent engineers, we don’t need to reinvent thermodynamics and no amount of research is going to discover significant breakthroughs in efficiency. The tremendous improvements in turbine efficiency have come from closely managed material improvements and incremental design improvements and are almost entirely the intellectual property of the turbine manufacturers. The US is blessed (yes, I even believe in some American exceptionalism and a benevolent Creator) with coal, natural gas, and uranium and I think we should use them without imposing significant costs on our neighbors until something cheaper comes along. Peter Moliterno M. Eng Cornell ’81, P. E. Fuelmaker (17:25:21) : .” Peter, WUWT!) Roger Sowell (12:22:08) :: — Thank you for the update. Only one nuclear plant site is under (serious) design progress – South Texas 3 and 4, near Corpus Christi TX. Roger Sowell wrote: None of what I wrote is BS, rather it is TS, where T stands for True. The facts don’t support your case though: Despite the subsidies and Production Tax Credits, the percent of our electricity from Renewable energy peaked in 1997. Since then its gone pretty much steadily downhill both as a percent of generation and also in absolute quantity. 1997 12.4% 1998 11.1% 1999 10.8% 2000 9.4% 2001 7.7% 2002 8.9% 2003 9.1% 2004 8.8% 2005 8.8% 2006 9.5% 2007 8.4% 1997 Renewable 433,636 Million kW hours 2007 Renewable 351,300 Million kW hours In contrast, over the same time frame, Fossil Fuel use grew by 23% and Nuclear grew by 28% Arthur Since then the percent of Renewable Slowtofollow, Just to add to what Rod said about decommissioning. Not only are the UK’s plants more expensive to decommission, but my understanding is that the UK also made the mistake of not requiring the plants to put a small per-kW-hr fee into a trust fund for plant decommissioning. Due to the magical power of long-term compounding interest, over a 40-60 year plant life, US plants have only had to put in ~0.25 cents per kW-hr generated, to fully cover the $0.5-1.0 billion cost of decommissioning a plant. As a result, not only does the taxpayer pay nothing, but there is no question as to what plant decommissioning really costs, on a per kW-hr basis. This allows and easy, informed decision to be made concerning nuclear’s relative economics when deciding whether to build a new plant, with all decommissioning costs fully considered. In the UK, on the other hand, because they put this expense off, the utilities, public, and govt. are getting a sticker shock at the end. The absolute numbers look very large, and it gives everyone the impression of a large subsidy/cost, that makes nuclear appear uncompetitive. The problem is that people don’t appreciate the long-term compounding interest effect, and tend to just add the final decommissioning cost directly, and arrive at a much higher, and false, per kW-hr cost. As for new plants in Britain, not only will they be cheaper to decommission, as Rod points out, but the US policy will be applied, and these plants will have to set aside sufficient contributions to a trust fund to cover all future decommissioning costs. Given this, there is no future liability for the British public. If the power companies decide to build new nuclear plants, it will be with the decommissioning costs fully included. Any new British plants will be LWRs, and we’ve decommissioned several of those in the US already, so we have a very good idea of what it costs. As for the global warming issue, I’ll just say that if we decide to do nothing, and not limit (or put a cost on) CO2 emissions, I agree that coal and gas (coal anyway) will likely remain at least somewhat less expensive than nuclear. As a result, it may be that few nukes would be built. On the other hand, if hard limits are placed on CO2, nuclear will be a large fraction of our future energy supplies, unless there is massive govt. intervention to force us to use all renewables instead (something that we may be beginning to see, unfortunately). As for subsidies in the UK, my understanding is that renewables get a guaranteed price for power that is well above the market price. Nuclear does not get this benefit. In fact, the companies proposing to build new nukes in the UK have all been promising that they would do it with no public subdidy. The only thing they’re relying on is whatever price is placed on CO2 emissions by the EU’s cap-and-trade market. I also believe that the UK govt. looked at the subsidies required to meet their CO2 targets by various means and they concluded that several times the overall subsidy would be required to use wind, vs. using nuclear. @adoucette (21:16:54) : “The facts don’t support your case though: Despite the subsidies and Production Tax Credits, the percent of our electricity from Renewable energy peaked in 1997.” I’m not sure what point you are trying to make, as to which part of my “case” you believe is unsupported. The link you provided is to U.S. figures, where the “Renewables” category includes large hydroelectric. The large hydro amount overwhelms all the others, wood, wind, solar, bio-gas, geothermal, etc. Furthermore, large hydro peaked in 1997, with the subsequent decline due to persistent drought and water management decisions. In stark contrast, the other renewables (excluding large hydro) have either remained fairly stable or grown dramatically. Wind generation in that data shows tremendous growth. As often said about the sun on this site – we live in interesting times. We’ve had the Industrial Revolution. We live during a technological revolution. We’re only just starting on the energy revolution. Regardless of peak oil myth or reality, our production and use of energy in the future will and must change (and not for the sake of the climate). Saying it need or should not is a bit like looking at 30 years of weather and calling it climate change. We have cheap and plentiful, centralised sources of power, but we’ve built ‘more of the same’ until we have problems thinking any differently. Energy in the future will be produced and supplied differently, although this change will hopefully be invisible to the consumer. Grid issues (access, control), storage, load balancing, distributed generation – these are all huge research topics at present for all concerned. You guys have cheap energy and you are complaining. As a domestic user I pay ~$0.22/kWh; industry in my region struggles with energy cost as we import >97% of our energy. Investing now in renewables is costly, but is seen as a way to insulate against external control of cost and supply. I hate subsidies and market incentives, but perhaps for some of us they are a necessary evil…..? Roger, You’ve posted again and again about how you believe that renewables can/will be a large part of our energy future, yet the energy industry doesn’t seem to agree with you. You blaim the decline on the drought as if that excuse matters. That’s at the heart of the problem with renewables. You say that water can be pumped uphill but the figures show that pumped storage consumed an insignifant 0.17% of our electricity last year and thus produced far less. Remaining stable means no growth. In contrast though, over the period from 1997 to 2007, electricity production grew by 19% Yes, Wind has grown by a large percentage, but in absolute terms it is still just a minor player: Percent of electric generation from Wind: 1997 0.09% 1998 0.08% 1999 0.12% 2000 0.15% 2001 0.18% 2002 0.27% 2003 0.29% 2004 0.36% 2005 0.44% 2006 0.65% 2007 0.77% To put it all in perspective, wind power is such a small amount of our generation that the overall drop in renewables in this period is almost 3 times the amount of wind power we generated last year. Arthur @Peter Moliterno and Robert Cook (the two PE’s in the discussion) – If you read Roger Sowell’s comments carefully, you will recognize that he is a strong supporter of natural gas, including imported LNG. He dismisses the value of atomic fission – ostensibly because it is politically unacceptable, or because it takes too long to build a plant, or because the plants MIGHT be damaged if someone flies a large, commercial aircraft full of fuel directly into the containment building. (That begs an interesting question – what would the effect be of a similar planned collision between a commercial airplane and an LNG tanker or reception terminal near a population center like Boston?) I suspect Roger’s advocacy is based on the fact that he and his clients do not like the competition that uranium fission provides. If fission was allowed to compete on anything close to a level playing field, the supply of energy would be virtually unlimited. The shift in the balance between supply and demand would move more in favor of the consumer and reduce the selling price of natural gas. (In fact, one of the big reasons that utilities stopped building nuclear plants in the 1970s was a fear of “overcapacity”. That should never concern consumers; over capacity leads to lower prices!) Base on my analysis of the market, the reduction in natural gas prices (and those of competitive fossil fuels) would be dramatic, especially if there is any cost associated with atmospheric waste dumping of products like NOx, SOx, particles, or CO2. (Have you ever noticed how fossil prices were low for 15 years after the fission plant construction completions of the 1970s and early 1980s?) If understand Roger’s vested economic interests, you need to look no further than his own web site. Here is a quote: “He advises, represents, and defends operating companies in civil matters related to process safety, environmental and other regulatory matters. As an attorney who understands engineers, he also works with other attorneys on matters where expert witnesses and technical lay witnesses are involved, such as deposing and cross-examining expert witnesses. Before opening his law office, he worked for 20 years in more than 75 refineries and petrochemical plants in a dozen countries on four continents. His engineering work covered a wide array, including design, process and project engineering, consulting studies for profit improvement, construction supervision and startup, maintenance, advanced process control, refinery process simulation and optimization, environmental compliance, and hazardous operations analysis and response, to name but a few.” My understanding of the situation is that Roger, like many people who argue for and even “embrace” (his word, not mine) intermittent power generation sources, is really in the business of selling fossil fuel. They KNOW that the diffuse, intermittent, politically popular “renewable” sources are expensive and that people cannot afford to pay what it would cost to enable those sources to expand very much. They also know that if their efforts helps to successfully delay construction of atomic fission power plants, they will retain their existing 85-95% market share until such time as the fission plant construction gets completed and the plants start operating. That puts big bucks into their pockets; the market for fossil fuels in the US is a roughly $1.5-2.0 trillion dollar per year enterprise. As the man in the movie said: “Follow the money.” As Roger has indicated above, his lifestyle is pretty pleasant: , too, have had some extensive sailing experience, it has mostly been on boats owned by the US taxpayers. I was assigned on several occasions during my career to teach midshipmen how to be competent mariners. I am not poverty stricken, but I have always considered that boats are holes in the water into which you pour money. As a professional naval officer and part time entrepreneur trying to implement a valuable, but politically unpopular energy solution, I have never had much disposable cash. Rod Adams Publisher, Atomic Insights Host and producer, The Atomic Show Podcast The question of airplane impacts on nuclear power containment structures was answered definitively at Sandia National Laboratories in 1988. The lab tested the impact of an F-4 on a piece of steel-reinforced concrete meeting nuclear specifications. The live test was commissioned by Japan, I believe Tepco. There are photos and videos of the test that can be found on the net. This was a more serious test than that of an air liner. The only structures with any penetrating power are the engine blocks, and that’s essentially what a military fighter is, a flying engine block. At the moment of impact, the F-4 was just under Mach 1. Result: the aircraft was in very tiny pieces, and achieved less than one inch of penetration. adoucette (06:00:19) : Well, you are very good at nay-saying and sniping from the sidelines. There was a time in the U.S. when power was provided by animals, and coal-burning to produce steam, and water-wheels on small dams. If any statistics were kept (and I doubt that), one could look it up.? @Rod Adams You make many (wrong) assumptions and statements about me, my motivations, my lifestyle, and my livelihood. One can only wonder why such words would flow forth, what deep wellsprings of envy or hate exist? You are apparently envious that I spent some time on boats, and you (apparently) jump to the conclusion that I live the life of the idle rich, a la Gatsby, yachting about to while away the endless hours. Hah! If you only knew. The fact is that my colleagues and I began serious research and development of alternative energy (back then we did not use the term “renewable”) in 1975, and our motivation was a desire to prevent escalating fossil fuel prices from raising electricity and transportation prices to unbearable levels. You may not have heard about it, but there was a serious energy shock in 1973, and again in 1979. You could look it up, though. You might also read my blogs for the subject “death spiral,” to get some perspective on my motivation. Your linking the building of nuclear power plants, and the subsequent decline in fossil fuel prices is, well, laughable. Are you suggesting that OPEC was so scared of nuclear power plants (in an industry that does not burn oil, so does not compete with OPEC’s product), that OPEC dropped their prices? Nuclear power scares the crap out of a lot of people for very good reasons, but I have never heard that one before! LOL!!!!! You say that you believe my advocacy is because I do not like the competition from nuclear. Zero points for that one, as you missed the mark entirely. You might have better success if you find a coal-industry supporter at whom you can fire those remarks. Coal is not and has not ever been an industry I worked in or for. In case you missed it, I worked in refineries and petrochemical plants. Not much coal burned in those. As I wrote on the nuclear fusion thread a few days ago, I am all for it. You bring me a nuclear power plant that is not by legal definition ultrahazardous, does not produce nuclear bomb material, does not create toxic radioactive wastes that endure for centuries, and produces power more cheaply than natural gas, and I’ll support it. That’s four criterion, and so far, nuclear fission is batting 0 for 4. You, on the other hand, want to see a world with literally thousands upon thousands of tiny nuclear fission power plants, near schools, playgrounds, shopping malls, or in neighborhoods where their radioactive leaks, malfunctions due to sabotage, and runaway reactions can and will poison millions of people. What a nightmare. >>Yet when I mentioned thr japanese idea for getting >>uranium from seawater using a polymer you said no problem! Never said any such thing! . >>While I agree with much of your point vis the UK; your >>statement is just not true for the rest of the planet. If you think that the UK will be safe and secure while Colonel Gadafi supplies us with Solar power, you should think again. . Regards nuclear reactor cores, if Chernobyl had had one of these, there would not have been this Great Green Outcry against nuclear power. . Couple of quick follow ups: In the UK the Non Fossil Fuel Obligation early rounds did pay a premium to nuclear. I think this went with the change to the Renewables Obligation mechanism. Re: CCGT and NOx emissions – there was a comment above on this suggesting it is a big problem? I think with modern plant this is not the case but would welcome some figures. and Ralph – good clip! thanks. Suffice it to say that I don’t share Roger’s optimism about long-term future natural gas prices. I remember when the EIA, with a straight face, predicted long-term (average) gas prices of $3-$4/MBTU over the next few decades. Since then, we saw prices of $10 to $15 under normal economic conditions. And now, we have a (temporary) glut of gas due to new shale deposits going into production, which is occurring simultaneously with a worldwide economic depression. And it has become clear that gas and oil prices are very sensitive to demand swings. So, with these two huge (temporary) factors that act strongly to reduce gas prices occurring simultaneously, do you know how low the price of gas is? About $3-$4, exactly what the EIA said it would AVERAGE, over normal, long-term conditions. The moment the world’s economy recovers, the price of gas will be over $10/MBTU. You can count on it. And at this gas price, nuclear can easily compete for baseload generation. This will certainly be true if they pass climate change legislation that has hard, legally-binding, and declining caps on CO2 production. The immediate effect this will have will be a shift from coal to gas. Any notion that gas could take over a large fraction of coal’s generation w/o the price of gas shooting into the stratosphere is simply fatuous. The reality is that some fraction (perhaps most) of the replaced coal capacity will have to be taken up by nuclear or renewables; enough to prevent dramatic escalation of gas costs. In other words, the use of gas (to replace coal) will rise until the price of gas reaches a point where nuclear or renewables are competitive. How much will be renewable and how much will be nuclear? It depends. If it’s decided by govt. fiat, it will be mostly renewables. If it’s left to the market, it will be mostly nuclear. It’s interesting how Roger talks about cost comparisons between nuclear and gas, and then goes on to talk about things like off-shore wind, but when doing so, suddenly stops talking about cost comparisons. The fact is that, with the possible exception of onshore wind, all renewable sources are much more expensive then nuclear. And on-shore wind’s cometitive costs do not include things like required backup (fossil) power and grid upgrades. Renewables are also much more subsidized then nuclear, as I pointed out earlier. And then of course, there are the absolute mandates (i.e., infinite subsidies), in case the exising large subsidies are not enough. As for fossil fuels (including gas), yes we know that they are cheaper than nuclear as long as they retain the right to pollute the atmosphere for free, and their (huge) external costs are not counted. As I say above, gas plants may be cheaper than nuclear now, but in the future, after economic recovery, they won’t be. If and when we have hard CO2 limits, they definitely won’t be. And yes, I’m willing to put that to a fair economic test. Just cap CO2, forget subsidies, and see what happens. Finally, concerning some things Roger says about safety, I have to laugh when I hear about nuclear being (“legally”) ultra-hazardous (when gas, or oil, is not, apparently). We have 40+ years of operating history in the Western world which decicively shows that nuclear’s risks are negligible, and orders of magnitude smaller than those associated with any fossil fuel, coal, oil, or gas. While fossil plants kill 25,000 Americans every single year, US nuclear power plants have never killed a member of the public, or had any measurable impact on public health, over their entire history. I care about the actual facts (like these) and couldn’t care less about “legal” definitions. These laws were almost certainly put into place for the reasons Rod alludes to in his posts. Powerful (fossil) interests have no interest in having a level playing field between nuclear and fossil (i.e., equal risk and/or environmental impact allowed for each) because they know that they would lose decicively. As for what the facts actually show, even renewables (e.g., wind) are less safe than nuclear. The wind industry has a very poor industrial (OSHA) safety record, and things like solar roofs fall under the home construction industry, one of the most dangerous industries in America, OSHA statistics say. BTW, the nuclear industry has one of the lowest accident rates, with OSHA statistics saying that it is as safe as office work. But hey, these risks take the form of a slow, steady stream of “ordinary” (read: acceptable) deaths from things like falls, so nobody will ever care (unlike with anything nuclear). The relative safety of the nuclear and wind insutries is explored in this article: It is equally clear that nuclear is much safer, and environmentally sound, than natural gas. Gas emits CO2, nuclear doesn’t. Gas pollutes the air (pollution which cases a steady stream of annual deaths), nuclear doesn’t. And finally, with respect to accidents, people are killed by various explosions, conflagrations, and poisonings (carbon monoxide) related to the use of gas that we don’t even hear about it anymore, because it’s not even considered news. No member of the public has ever been killed by nuclear-related accident. But hey, nuclear is “legally” hazardous. And finally, with respect to facility accidents or attacks, both oil refineries and LNG terminals, as well as chemical plants, are infinitely more dangerous than nuclear power plants. Not only are accidents, or successful attacks, much more likely to happen, but the maximum consequences of those attacks are also far larger. Both the NRC and EPRI (the Electric Power Research Institute) agree that flying a large commercial jet into any of our existing nuclear plants has a negligible chance of causing a significant release. Such an attack on an oil refinery or LNG terminal would have a much greater chance of succeeding, and the results would actually be far more severe. But hey, they’re held to a far different (lower) set of standards. They have very good lawyers, and politicians. One final point, the oil and gas industry is (quitely) behind a lot of the renewables push, espeically wind. This is because only oil and gas power plants can rapidly change their output levels, in reponse to the erratic ups and downs in wind generation. Coal and nuclear plants have a much harder time doing this. Also, because wind generates more power at night (when we least need it), it cuts into the baseload demand that nukes and coal would otherwise serve, and increases peak demand, that is generally served by gas plants. The reason they like wind is that it acts to supress coal and nuclear development, and increase the share of generation that has to be provided by gas (or oil). This provides more assurance of demand (and high price) for their product. We’ll be dependent on the Middle East for our electricity, as well as our transportation, but what they hey! @Roger – I was not at all implying that you live the life of the idle rich. I am sure that you work very hard for your income. However, I know a bit more than you think about the energy shortages of the 1970s. For example, I know that as late as 1978 oil held a 17% of the US electricity market with most of the oil burning power plants being located in places like my home state of Florida and in the Northeast. Not surprisingly, those were some of the places where nuclear power plants were purchase and completed and helped push oil out of the electricity market. There were other places around the world where oil was a significant contributor to electrical power generation – the UK, France, Japan, Taiwan, Korea, the Netherlands. As nuclear generation grew faster than the overall market, oil lost market share. Though many fission opponents like to claim that nuclear does not compete against oil, that is only in electrical power markets because that battle is already over and oil lost. The loss of customers in the electrical power market was quite important for refiners since the oil that got burned in power plants was called residual oil; it was a high carbon portion of the input crude and not suitable for use in internal combustion engines outside of very large (10s of MW) diesels. To overcome the effect of having a glut of this particular product refineries worked hard on cracking technology that adds H2 to make lighter hydrocarbons out of the heavy residuals. I am sure that you were involved in the technical end of that process, though you might have been unaware of some of the market pressures that were pressing it forward. The competition from nuclear is also quite important to natural gas marketers – every time a nuclear plant has an unscheduled shutdown gas gets burned to replace the lost power output. One of the reasons that LNG prices have been elevated for the past 20 months is that a very large complex in Japan – Kashiwazaki-Kariwa with 7 reactors totaling 8.2 GW of capacity- has been shut down since July 2007 to conduct inspections following a major earthquake. The effect on the earnings for the plant owner are well documented, but every time there is an increased cost for someone, there is a corresponding increase in revenue for someone else. In this case, the gas/oil industry was the beneficiary. There is another major place where nuclear fission technology has affected the market share for refined oil and that is in naval vessels. During the Vietnam era, the US Navy was the oil industry’s single largest customer. Aircraft carriers are prodigious consumers of oil products, but our ten nuclear aircraft carriers use far less than the oil burners that we had back then. Submarines are a much smaller effect, but there are several navies (US, UK, France, China, Russia) that have submarine fleets that consume uranium instead of diesel fuel. According to Shell’s annual energy outlook, nuclear plants around the world produce about 12 million barrels of oil per day equivalent (not including any military power reactors). That is 30-40% more than Saudi Arabia. Can you honestly tell me that the introduction of a brand new energy source larger than the world’s largest oil producer had no effect on the price of oil, gas and coal? Can you honestly tell me that the fossil fuel industry, which understands that there is and always has been a certain amount of fungibility among heat producing fuel sources, was not concerned enough about the growth of that competitor to work hard to slow it down? Please. Rod Adams Publisher, Atomic Insights Host and producer, The Atomic Show Podcast The author is right about wind. For those interested in outside reading, go to energy tribune. They offer excellent analysis. Consider this article: Hydro, geothermal, and biomass are too small to matter and geographically restricted. Wind, solar, wave & tidal are too intermittent and diffuse. Without some super battery, they will never take off. Consider this: one gallon of gasoline contains as much energy as one ton of lead-acid storage batteries. As for nuclear, I disagree that it is a “stop-gap.” It will last longer than fusion, since fusion requires tritium, which must be bred from precious lithium resources. At $10 a pound uranium is finite, but at $200 a pound it can be extracted from seawater using a fabric adsorbent. Generation IV reactors are 100 times as fuel efficient like the fast breeders the author mentions, but without the proliferations threat. @Rod Adams. (17:23:11) : Nice analysis, even if misguided and very wrong. Once again, you seek conspiracies where none exist. Let me give you a clue word, “COAL.” It wasn’t the oil industry that was concerned about nuclear power, and the reasoning you gave above had very little, if anything, to do with refineries installing heavy oil conversion processes. And, by the way, very little hydrogen was or is added to the heavy fuel oil. A nuclear advocate’s mistake, that, so it is to be excused. Only one of many facts is that light crude oil was in short supply, but heavy crude oil was in greater supply. To simplify a complex explanation, heavy crude oils produce more heavy fuel oil than does light crude oil. Refiners saw that the excess of heavy fuel oil would depress the market for that product. Various processes were built to make gasoline and other products from the heavy fuel oil. The net effect was more gasoline and diesel fuel from a barrel of crude oil, and the demand for crude oil thus “went down” relative to what it would have been otherwise. The reduced demand kept world crude oil prices and product prices somewhat lower than they otherwise would have been. Another of the many facts that led to refiners converting heavy fuel oil to lighter products was the push by environmentalists against “acid rain.” Burning heavy fuel oil in power plants was seen as a contributor to acid rain, but refiners concluded it was not economic to desulfurize the heavy fuel oil. Still another factor is that heavy fuel oil was never a major product in terms of volume or revenue for refiners. So, your claim that nuclear power took away the market is ridiculous. So no, it was not the oil industry that lost market share to nuclear power, it was the coal industry. Shell’s energy outlook, which I have not looked at for many years, may well have reported what you wrote. Shell has trouble doing their accounting, as is well-known but this is not the time to go into that. But the fuel displaced by nuclear power was not oil, it was coal. Shell knows this, but as they are an oil company, it is easier to state things in terms of oil-equivalence. We did not have to “work hard to slow it down,” as there were, and still are, plenty of thoughtful people who appreciate the dangers of nuclear fission and properly oppose it using the laws of the land. By the way, oil refineries also have similar opposition, as does almost any industrial facility in these times. What is so special about nuclear power, that it should receive a pass? To use your word, Please. The nuclear power industry has managed to escalate the initial capital costs to the point where it now costs around $10,000 per kW to construct, and that is not in California, and at least six years. The costs per kW keep escalating as the years go by, and one must wonder why is that? Classical economics holds that the unit cost of a technology will decline over time, as experience is gained and efficiencies are discovered. Such declining unit costs are occurring for wind and solar and bio-gas, but not for nuclear. And, for those who believe that a fee for CO2, or a cap on carbon, will bring nuclear to the forefront of technologies, you might want to reconsider. The current buzz-phrase is cradle-to-grave, and on that basis nuclear power produces prodigious amounts of CO2 for fuel production and plant construction, and decommissioning plus waste fuel disposal. Last, it is interesting that you noted that nuclear power plants in Japan were taken off-line after an earthquake. Can you please explain how the natural-gas-fired power plants managed to withstand the earthquake, and were ready for service upon demand? This has been great fun, Mr. Adams. If there are any other misconceptions you want to clear up, they will have to wait for quite a while. Other things require my attention for the next few weeks. slowtofollow (16:27:46) : NOx emissions from modern gas fired turbines are insignificant unless the command and control people use the standards to obstruct and play gotcha over recordkeeping, monitoring and testing, which they love to do. Last time I followed it, about ten years ago, new source performance standards had dropped into single digit ppm. I believe that was corrected to 15% oxygen, so for a million pounds of gas (24,000 MMBTU), NOx emissions (as NO2), NOx emissions would be less than 500 pounds, which at a rate of a $1/pound to buy NOx credits, would be about $.02/MMBTU compared to prices of $3-$10 for the gas itself. Sorry for all the English units. For comparison sake, NOx emissions/BTU from the most efficient spark ignition or compression ignition engines is about 100 times greater. Rod: Would you consider writing a guest post about the hazards of modern nuclear plant designs? My understanding is that although there is a remote possibility of a total property loss at plant that has multiple failures, there is no possibility of a runaway nuclear explosion and the maximum release of radionuclides is also quite limited. I think if people understood that the risks did not include a nuclear explosion, it really might make a difference. I also think that if put in terms of dosage, which could be compared to the differences in natural background and elevation, people could be made more comfortable with the risks of a worst case release. Like you said, there are hazards from LNG storage sabotage, and I would add coal sludge pond collapses and underground fires. Roger: That was a cheap shot against Rod about very little H2 being added to heavy oil. It is beside the point but for the benefit of other readers, H2 is primarily added to remove sulfur and other contaminants. Although I agree that refiners certainly didn’t fight very hard to continue selling expensive residual fuel to power plants that decided to repower with gas, it certainly did free up some feedstock to be upgraded. The markets for fuels are so deep that the conspiracy theories just don’t make much sense, except when laws are written to put competitors out of business, which is my complaint about CA policy. I would hate for anyone to think that I wanted to raise the cost of oil or propane heat with carbon taxes so that I could charge more for my wood fuel alternative. And I couldn’t possibly supply very much of the resulting demand and would just disappoint my prospective customers. New England desperately needs more natural gas capacity and would greatly benefit from a few more nukes, but I guess that will take several years of $100+ oil before the political will develops to shut down the NIMBY’s and BANANA’s (build absolutely nothing anywhere near anybody) @Roger and Fuelmaker – Both of you use the wrong word. It is not a conspiracy when individuals and groups that have a product to market work to put a negative face on a competitive product. It is sometimes a coordinated strategy, but usually it is simply normal business practice. Steel makers talk about the negatives of plastic bumpers, plastic bottle makers convince people that steel cans leave a funny aftertaste, corn growers emphasize the dangers of imported oil, railroads talk about the way that individual trucks cause traffic jams, and coal guys talk bad about nuclear power. Roger – approximately half of ExxonMobil and Chevron’s energy production comes in the form of natural gas. Gas and nuclear constantly compete for market share. The nuclear plant shutdown that I have mentioned has been elongated by an absurd amount of caution. There was essentially no reason for continued shutdown found during the extensive investigations, but the regulatory bodies forced the investigations to occur. It is as if you were forced to do a ground up crawl through of every inch of your house before reoccupying it after a quake. The gas plants might have experienced damage and piping misalignments that have resulted in leaks or potential future failures, but no one knows because no one was forced to check. Finally, here is a link to a company that supplies industrial H2 for oil production. Seems to me that 12.4 billion SCF per day is a rather substantial quantity, but perhaps it is not so large from your perspective as a refinery expert. The company that supplies this market lists several reasons that H2 is used in oil production. Here is one of them: “Increased consumption of low quality ‘heavy’ crude oil, which requires more hydrogen to refine.” It has been fun. Look forward to sparring with you again sometime. I have no illusions of my ability to change your mind; what is more interesting to me is the effect that each of our arguments have on those observers who are watching and reading. Rod Adams Publisher, Atomic Insights Host and producer, The Atomic Show Podcast Out of the blue yonder ….. World Nuclear News has 2 items . Nuclear power back on UN climate change agenda. The first drafts of texts to be the basis of negotiations on a future United Nations climate change agreement in Copenhagen in December have brought back the issue of nuclear energy into the discussions. A key element for the new agreement to supercede the first phase of the Kyoto Protocol will be the future of the Clean Development Mechanism (CDM) and Joint Implementation (JI), which both give emissions credits to projects which help reduce greenhouse gas emissions. Under rules agreed in 2001, nuclear projects cannot be counted towards credits. The draft text suggests that nuclear projects being granted full eligibility as CDM and JI projects is the best of four possibilities. WNN 21/5/09. US economic report updated. An update of the 2003 MIT study on the economics of nuclear power has been published, with a primary focus on the USA. The report. WNN 21/5/09. Fuelmaker – thanks re:NOx. That was my recollection from a plant visit where I had a figure of about 2.5ppm in memory but I wasn’t sure. All – re: power options – informative and enjoyable stuff. FWIW my view is that the range of possibilities (and I strongly advocate less waste through cogen, trigen, improved efficiency etc etc) means that developed economies have no need to fear energy security etc. IMO open and informed debate from that perspective has a lot to offer. Specific local conditions, resources and requirements should feature strongly in debate not rigid idealogical positions. Re: lobbying, conspiracy and BAU – as an observer, unless one makes it a full time job getting to the bottom of the detail, one never knows where the balance is here. I’d say however look at the auto industry – GM filing for bankruptcy protection says a lot IMO. I can recall all the cries over the years about the impossibiity of change to better fuel efficiency standards, new technology etc etc and the awful implications these would have for economies jobs etc etc. From where I am its hard to see how things could have been worse! Best wishes to all The assumption that government restrictions on CO2 emissions and government requirements for CO2 ‘sequestration’ would create a ‘level playing field’ for the free market to operate in seems crazy to me. First off, there is no need to restrict CO2 generation (a beneficial trace gas, essential for all life on Earth), so the attempt to do so is fundamentally wrong-headed and must be stopped. Second, CO2 sequestration from all I’ve read would be a fabulously expensive and fruitless exercise. Conceivably you could sell off the resulting CO2 for industrial processes or soft drinks, or something, but there’s no shortage of it now. So how does government burdening the coal and natural-gas power industry with insane costs create a ‘level playing-field’? From all the discussion above, it appears to me that even absent such a leg up as you propose, nuclear power can compete quite well with coal and natural gas, and should have a major place in the energy mix. /Mr Lynn Roger wrote: “There was a time in the U.S. when power was provided by animals, and coal-burning to produce steam, and water-wheels on small dams. If any statistics were kept (and I doubt that), one could look it up. ” Actually its fairly well documented. In the US Fuel/hydro energy first exceeded animal energy in ~1870. In that year animals produced ~ 8.4 Billion Horse Power Hours and inanimate sources produced ~ 8.5 Billion. By 1920 animal sources had almost doubled to 15.2 Billion HPHs but inanimate sources were ~ 268 Billion HPHs. Still, I don’t find that this is very instructional. I prefer to look at trends over the last several decades, not centuries. In that context none of the renewable technologies are new and each has gone through several itterations of major improvements such that the technology is now quite mature, meaning advances in the price/performance of all of them tend to be in small increments.?” Well talk about setting the bar really low. Wind produced ~0.8% in 2007 and the percent grew in 08. But wind did not double the amount of electricity it generated. (you are confusing yearly installed capacity with generation) For the last five years (03-07) the average annual increase in generation was 26%. So let’s consider. The average growth in electrical generation for the last 10 years was 1.8%. Let’s assume that, because of the economy, that the growth over the next 10 years will be somewhat less, so lets use a conservative growth rate of 1% per year. Lets also assume optimistially that wind continues to grow in generation by 26% per year and capacity figures remain the same. Now year to year growth at this high of a percent becomes very difficult very quickly, and as we increase the number of turbines the wind fields will begin to decline in quality, so expecting capacity to remain the same is also optimistic. Still, using those assumptions: By the end of 2014 We will have installed four times as many new Wind Turbines (capacity) as we have installed in 2007. To keep up with the 26% growth we will need to install in 2014 alone, the same number of wind turbines that we have installed today. And with all that we will be producing only 3.6% of our electricity from wind. I’d continue but at this level of compounding the installation rates just become silly. Arthur “”” Arthur (08:42:19) : Roger wrote: “There was a time in the U.S. when power was provided by animals, and coal-burning to produce steam, and water-wheels on small dams. If any statistics were kept (and I doubt that), one could look it up. ”. “”” Where do you get the idea that the production of PEV solar power is a manufacturing problem ? It’s a technology problem; mass available solar cells are just not very efficient. One of the largest module suppliers claims their panels are 18% efficient. They don’t quote any operating conditions or output power levels to support that; and they don’t say anything about the efficiency of the required AC inverter systems. (In)Efficiency translates into surface area, and the cost of covering surface area with any inanimate structure isn’t going down any time soon; and it won’t be going down just because some of the construction material is silicon. I know there are people who claim they can just paint on the solar cell material onto steel plate of somesuch. Well that may be true, but it also isn’t any 18% efficiency either. Did I mention that solar energy comes to us at an average rate of 168 Watts/m^2; and that is under ideal conditions. Yes it is good on your roof; so long as your friends and neighbors don’t mind helping you pay the cost through their tax dollars; which incidently are earned mostly by the consumption of fossil fuels. I’d like a dollar for everytime somebody says that the cost is just a manufacturing problem and we will solve that when we get into full production. Not exactly, you will get into full production when you get the energy cost of the process down so that there is actually an energy process. Lower costs bring higher production; not the other way round. George Make that “so there is actually an energy gain.” George wrote: ” Where do you get the idea that the production of PEV solar power is a manufacturing problem ?” From one of many articles like this: But I’m also a realist, so that’s why I said its possible, not that its probable. I’ve read a steady stream of PV “breakthroughs” for the last decade or longer and none have really generated the promised reductions in cost per watt. Where PV is concerned, it appears a real challenge to bring lab results to the end market. Arthur George, Another article on the Thin Film PV market. Arthur Mr Lynn, Well, I would argue that nuclear has already been burdened with “insane costs”, in order to avoid negligible risks. I also don’t share your optimism that nuclear could be competive as things are currently regulated (i.e., a hopelessly unlevel playing field). This is one weakness of any “leave it to the market” argument. The very concept of which source is cheaper is actually not meaningful. The correct description is which source is cheaper, as currently regulated. The level of regulation that each source is held to has such a strong effect on price that one cannot talk of “fundamental” cost differences between sources, without given regulations full and careful consideration. To me it is incredibly clear that nuclear is held to infinitely higher standards, and is required to spend thousands of times as much money per unit of risk/damage avoided. In response to an event which emitted no pollution and killed nobody (TMI), the govt. (NRC) put into place regulations that literally doubled the cost of nuclear power, without so much as a congressional debate. Compare this to the wailing and hand-wringing over CO2 limits, that would have a similar impact on coal. All this spending is to prevent even a miniscule chance of a reactor emitting any pollution. By contrast, coal plants cause 25,000 American deaths every single year (whereas nuclear plants have had no measurable health impact, ever). Most believe that even a worst case meltdown even (the thing that all that money is going to prevent) would inflict less than a tenth of the damage caused ANNUALLY by coal plants. Most of this pollution is caused by some very old coal plants that were allowed to operate, even to this day, even though they don’t meet the requirements of the (1970!!) Clean Air Act. Can you imagine if a reactor ever emitted pollution that would have that kind of effect? The pollution at these plants could be mostly eliminated at a cost of only a fraction of a cent per kW-hr, but they still won’t require it! Oh, and their toxic waste stream (i.e., fly ash that contains mercury, arsenic, uranium, etc…)?? They got it classified as non-hazardous!! (In a disturbing way, you almost have to be impressed by these guys.) Meanwhile, nuclear costs several cents more, just to avoid risks that are several orders of magnitude smaller. What would a level playing field be if we ignore global warming? Well, nukes are not allowed to emit any pollution at all. But it goes way beyond that. They have to ensure that even the chance of emitting pollution is negligibly small. They also have to ensure that their wastes do not ever have any impact, for as long as they remain hazardous. What would the equivalent be for coal? Simple. Zero allowable emissions of SOx, NOx, mercury, particulates, radioactive isotopes (uranium), etc.., etc.. Basically, all of their wastes/toxins would have to be fully contained, and they would have to guarantee their containment for as long as they remain hazardous (much longer than nuclear waste, actually). You can vent the CO2, though….. This, of course, is a standard that coal could never meet, at almost any cost. One beneficial side effect of CO2 limits, even if you don’t believe in AGW, is that it will end up making coal contain all those other toxins, while they are containing the CO2. When I talk about a level playing field, one of the main things I’m referring to is that external (i.e., public health and environmental) costs be fully counted. Fossil fuels have been getting away scott free on this for decades, and it’s time it stopped. Roger wrote: “The current buzz-phrase is cradle-to-grave, and on that basis nuclear power produces prodigious amounts of CO2 for fuel production and plant construction, and decommissioning plus waste fuel disposal.” The issue of net CO2 emissions from various energy sources, including nuclear, has been studied. Roger also wrote: “Can you please explain how the natural-gas-fired power plants managed to withstand the earthquake, and were ready for service upon demand?” This was purely a political decision, based on the deeply ingrained notion that zero risks from nuclear are acceptable, while thousands of routine deaths from other sources, like oil/gas-fired power plants, are perfectly OK. The nuclear plant in question survived the earthquake without releasing any radioactivity, and without suffering any detectable significant damage. Despite this, they decided to close the plant for an extended period, in order to do intense inspections (to further prove that there was no damage), and to install earthquake upgrades (despite the fact that the plant wasn’t damaged by the earthquake). Roger suggests that the gas plants were less damaged, or were somehow technically capable of resuming operation whereas the nuclear plant was not. There is no truth to either of these assertions. The only difference is in how nuclear is (politically) treated. It was all pure BS. Japan made a concious decision to run a raft of very old, dirty fossil-fired power plants, in place of a perfectly fine, non-polluting nuclear plant, for over a year, thereby emitting an enormous amount of CO2, and consigning hundreds, if not thousands, of Japanese to their deaths (from the pollution). They also sent huge amounts of money to Middle Eastern regimes; always helpful in terms of geopolitics and security. I consider it to be a shameful decision. Rod, I am very happy to see you were able to post here. I have learned a lot from your podcasts and hope others here will tune in:. I also highly recommend John Wheeler’s podcast, This Week in Nuclear: (John is an engineer who has worked in the nuclear energy industry for more than 20 years). Note to WUWT: please consider asking Rod and/or John to write a guest post on nuclear power. It is an essential and central issue in the AGW debate. Thanks for a great site! The two highlighted sentences are contradictory. If coal could never meet a CO2-exempt zero-emission standard, how could it meet a zero-emission standard if CO2 (the chief product of carbon combustion) were included? Basically you are saying that we should adopt regulations that prevent us from burning coal at all. That would do more than “level the playing field” for nuclear; it would remove the only significant competition. While it is certainly arguable that nuclear power has been severely over-regulated, to the point of strangulation, and that those regulations ought to be rationalized in light of both experience and advances in technology, it is not true that coal-fired power operates free and clear, at least in the USA. I’m sure there are experts here who can testify to many mandated improvements in emissions and residue management over the last few decades. One has to wonder, too, at the basis for the oft-cited figure of “25,000 deaths” caused by fossil-fuel burning. My guess is that it includes a very large measure of speculation and hyperbole. It is certainly reasonable to “level the playing field” by drafting sensible regulations for both nuclear and coal. But having practically killed nuclear power by over-regulation, it is not reasonable to kill coal in the same way. That’s what giving in to anti-CO2 alarmism would do. /Mr Lynn Mr. Lynn, Where is it written that if you have a level, or fair, playing field, no contestant can (completely) lose? I thought that’s what the term “playing field” means (it’s possible to lose). I did say that, in my opinion, if coal had to meet the same standards as nuclear, it would cease to exist. It would certainly be more expensive. There is nothing contradictory in that. The 25,000 deaths figure is the EPA’s offical position. The article below provides several links (at the right): I believe that if nuclear’s regulations were lowered to match the allowable risk/impact levels given to coal (something I doubt is even possible), it would be cheaper than coal. However, I know that this is not politically possible. The only thing possible is to raise the bar for coal. I do support that purely out of spite, however. Coal IS under-regulated right now, to an enormous degree. Most scientific studies that put the external (public health and environmental) costs of various energy sources in economic terms (such as) conclude that whereas the external costs of nuclear and renewables are very small, the external costs of coal are enough to more than double its price (4-8 cents/kW-hr). This is real damage, as opposed to costs associated with complying with pointless over-regulation. Thus, the fact is that right now, if all costs are accounted for, coal is significantly more expensive than nuclear. The good news is that the 4-8 cents worth of damage can be removed by spending less than 4-8 cents on pollution controls (this is a result of the fact that coal is not optimally regulated). Coal should have to pay for its external costs and/or meet stricter pollution standards. If it ends up more expensive than alternatives, it simply should not be used. It’s not written in stone anywhere that we must continue to use coal, or any other energy source. There is nothing unfair in all of this. The fact that coal was under-regulated all these years, and did not have to account for its external costs, IS unfair. Another thing that is unfair is what coal-state politicians are doing right now to evicerate the climate bill. They’ve added offset loopholes to prevent coal utilities from actually having to reduce emissions. On top of that, they put in language which states that if emissions do have to be reduced, subsidies will be put in place to ensure that coal with sequestration is used, as opposed to alternatives like nuclear, no matter how much more expensive it is. Renewables, of course, get the requirement that we use it for 15% of our power, no matter how expensive it is (along with enormous subsidies). Nuclear, meanwhile, gets nothing. THAT is unfair. All non-emitting sources should be able to compete fairly, on price, on a level playing field. Re: coal and its impact – just saw this item courtesy of The Blackboard: Thanx for posting that link, slowtofollow. There is nothing wrong with coal power. Stack scrubbers assure that only CO2 is emitted, and as the technically aware folks who read this blog know, CO2 is beneficial, not harmful. And of course, events show that CO2 does not cause global warming. It’s interesting to see proof that Obama is is only interested in aggrandizing his power, and that he had his fingers crossed behind his back when he said he was going to bankrupt the coal industry. The Earth Policy Institute is one of Lester Brown’s organizations; Brown is a charter member of the global everything-is-running-out-and-mankind-is-despoiling-the-Earth Alarmist cabal, so one should take anything he publishes with many grains of (abundant) salt. I don’t think anyone would disagree with this. My complaint was that you equated “a level playing field” with requiring zero emissions from burning coal, i.e. not even CO2. This may be the aim of the neo-Luddites who would take us back to the Stone Age, but it is patently absurd: not only would it price coal completely out of the market, but CO2 is not a pollutant, and riding the anti-CO2 train in order to get real pollutants out of smokestacks is irresponsible in the extreme. In point of fact, we need coal and nuclear power, and lots of both. Cheap, abundant energy is the key to economic growth, economic development, and human progress. If you want to stop the progress of civilization in its tracks, make energy expensive and scarce. Unfortunately, that is the avowed aim of many in positions of power in the Western world. /Mr Lynn Mr Lynn, You are exactly right. CO2 is not any more of a “pollutant” than H2O. It would be hard to find a substance that is less harmful and more beneficial to life. In fact, I would like to have someone identify a molecule that is less harmful and more beneficial than CO2 at the concentrations being discussed. I recall when the debate was over the proposed requirement to use scrubbers on smokestacks in coal-fired power plants, in the late 1960’s, IIRC. The stack scrubbers were made a requirement without too much controversy because as a regulatory requirement, it imposed the same financial burden on all coal plants, and the cost was then passed on to rate payers. The result is that more than 99.99% of all particulates [soot] are removed from coal plant emissions, which are now composed of pure CO2 and some water vapor. Pictures of smokestacks emitting smoke are either old photos, or they are photoshopped; you cannot see CO2 emissions. Politicians need to be asked what will be used to generate power, if coal and nuclear sources are not used; and what the cost comparison will be. People vote mostly with their pocketbooks in mind. Given a clear choice between using extremely cheap coal and nuclear power, or unreliable and very expensive alternative energy sources, most people will opt for the least expensive power. The problem comes in framing the argument that way — which is why every critic of CO2 runs away from any public, moderated debate on the issue. Cheap new generation nuclear power my backside! Same old over-optimistic tripe. see “Nuclear Renaissance Runs Into Trouble ” here: Never believe a nuclear salesman. Always ask who pays, who benefits and what are they ignoring in their sales pitch.
http://wattsupwiththat.com/2009/05/25/renewable-energy-%E2%80%93-our-downfall/
CC-MAIN-2015-35
refinedweb
69,772
60.85
Introduction Searching records is a common requirement in web applications. There is usually a requirement to allow users to quickly access the data they want from large records. While it is possible to do this using simple SQL queries, sometimes it is more efficient to use a search engine. Solr is a popular search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, near real-time indexing, dynamic clustering, database integration, rich document handling, and geospatial search. In this tutorial, we'll be looking at performing full text search using Sunspot, which is a library that enables integration of Solr in ruby applications. Project Setup I've created a simple app on Github which I'll be using here instead of starting with a new project. The app shows a list of products with their name, image, price and description. I have included some seed data so you can run rake db:seed if you don't want to input the data your self. The application uses Paperclip for image attachments and since I use image resizing, ImageMagick will need to be installed on your system. You'll also require the Java runtime installed on your machine to proceed with the tutorial. The image below shows the application. The search form at the top does nothing at the moment, but we will enable a user to search through the products and get results based on not just the product name, but also on its description. Searching We'll start off by including the Sunspot and Solr gems in our Gemfile. For development, we'll use the sunspot_solr gem that comes with a pre-packaged Solr distribution, therefore we won't need to install it separately. gem 'sunspot_rails' group :development do gem 'sunspot_solr' end Run bundle install and then run the following command to generate the Sunspot configuration file. rails generate sunspot_rails:install This creates the /config/sunspot.yml file which lets your app know where to find the Solr server. To set up the objects that you want indexed, add a searchable block to the objects. In the starter project, we have a Product model with name, price, description and photo fields. We will enable a full-text search to be done on the name and description fields. In /models/product.rb add: searchable do text :name, :description end Start the Solr server by running: rake sunspot:solr:start Sunspot indexes new records that you create, but if you already have some records in the database, run rake sunspot:reindex to have them indexed. We then add the code in the Products controller that will take the user's input and pass it to the search engine. In the code below, we call search on the Product model and pass in a block. We call the fulltext method in the block and pass in the query string that we want to be searched for. There are several methods we can use here to specify the search results we want. The search results are then assigned to @products which will be available to our view. def index @query = Product.search do fulltext params[:search] end @products = @query.results end Run the application and you should now be able to search through the available products. Solr will do a case insensitive search through the product names and descriptions using the word or phrase input. You can make one field hold more weight than the other to improve the relevancy of your search results. This is done with the boost method which is passed a value that determines the priority assigned to the different fields. The field with the highest value will carry more importance. In our application, we can specify the products which have the searched string in their name to be scored higher. We do this by making the following changes in /models/product.rb. searchable do text :name, :boost => 2 text :description end Reindex the records with rake sunspot:reindex and now the results with the searched term in the product name, will be placed higher than those with the term in the description. You can add more records to test this out. Faceted Browsing Faceted browsing is a way of navigating search data by way of various sets of associated attributes. For example, in our application, we can classify searches for products by price range and give counts of each range. First add price to the searchable method in /models/product.rb searchable do text :name, :boost => 2 text :description double :price end Then call facet in the controller. The products will be faceted by the range of their price in intervals of $100.00. Here we assume that all products cost less than $500. def index @query = Product.search do fulltext params[:search] facet :price, :range => 0..500, :range_interval => 100 with(:price, Range.new(*params[:price_range].split("..").map(&:to_i))) if params[:price_range].present? end @products = @query.results end In the view file, paste the following at the place you want to see the faceted results. <div class="row"> <h3>Search Results</h3> <ul> <% for row in @query.facet(:price).rows %> <li> <% if params[:price_range].blank? %> <%= link_to row.value, :price_range => row.value, :search => params[:search] %> (<%= row.count %>) <% else %> <%= row.value %> (<%= link_to "X", :price_range => nil %>) <% end %> </li> <% end %> </ul> </div> Now when you search for a term, there will be a list of facets showing how many results are in each price range. In our example application, if you search for the word 'camera', you will see the following list. 100.0..200.0 (2) 200.0..300.0 (1) 300.0..400.0 (1) Each item is a link and when clicked on, you will get a list of the products that meet your search term and that also fall into the price range you clicked on. The link passes the original search query and the chosen range to the index action. Since it passes the range as a string, we use Range.new(*params[:price_range].split("..").map(&:to_i)) to convert it back to a range. You could use conditional statements to output more user friendly links like $100 - $199 (2) instead of 100.0..200.0 (2) but we won't get into that here. Advanced Configurations There are some further configurations you can do on Solr to customize how it works. In its default, Sunspot performs full-text search by dividing the search string into tokens based on whitespace and other delimiter characters using a smart tokenizer called the StandardTokenizer. Then the tokens are lower cased and the exact words are searched for. This might be okay at times, but you might also want to configure the search engine to allow for human error or to allow queries to be made that aren't too strict. For instance, you might want to provide some synonyms to the engine so that when the user doesn't enter the exact text that is in your records, they might still find similar results. An example of this, is that you might have an item labeled 'ipod' in your records. You may provide synonyms like 'iPod', 'i-pod' and 'i pod' to increase the odds of users finding the data. Another useful functionality you could add is stemming, which will allow Solr to match different words with the same root. For example, if the user entered 'run', they would get results with 'run' and 'running'. Or if they searched for 'walk', the results will include data that contains 'walk', 'walking', 'walked', and so on. Solr settings are found in solr/conf/schema.xml and that is the file to modify to change the server's configuration. This is out of the scope of this tutorial, but for more on this, check out the advanced full-text config post and the Solr wiki. Conclusion Now to finish up, stop the Solr server by running: rake sunspot:solr:stop We have looked at how to use the Sunspot gem to utilize the Solr search engine in a Rails app. Besides the settings we have used, there are plenty more you can use to customize your search results. Be sure to check the Readme file for more options. Solr gives you the kind of searching capability that isn't easy to achieve with regular SQL queries. For simple apps, with a small amount of database records, SQL queries will do without much of a performance hit. But if you want something that is scalable, then it is worth looking into Solr or other available search engines. Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/full-text-search-in-rails--cms-20638
CC-MAIN-2017-22
refinedweb
1,451
72.36
Linux kernel coding style¶: 1) Indentation¶.through;. 2) Breaking long lines and strings¶ Coding style is all about readability and maintainability using commonly available tools. The preferred limit on the length of a single line is 80 columns. Statements longer than 80 columns should be broken into sensible chunks, unless exceeding 80 columns significantly increases readability and does not hide information. Descendants are always substantially shorter than the parent¶ }(); } Also, use braces when a loop contains more than a single simple statement: while (condition) { if (test) do_something(); } 3.1) Spaces¶); Use one space around (on each side of) most binary and ternary operators, such as any of these: = + - < > * / % | & ^ <= >= == != ? : but no space after unary operators: & * + - ~ ! sizeof typeof alignof __attribute__ defined no space before the postfix increment & decrement unary operators: ++ -- no space after the prefix increment & decrement unary operators: ++ -- and no space around the . and -> structure member operators. Do not leave trailing whitespace at the ends of lines. Some editors with smart indentation will insert whitespace at the beginning of new lines as appropriate, so you can start typing the next line of code right away. However, some such editors do not remove the whitespace if you end up not putting a line of code there, such as if you leave a blank line. As a result,. 4) Naming¶ C is a Spartan language, and your naming conventions should follow suit. asinine - chapter 6 (Functions). For symbol names and documentation, avoid introducing new usage of ‘master / slave’ (or ‘slave’ independent of ‘master’) and ‘blacklist / whitelist’. - Recommended replacements for ‘master / slave’ are: - ‘{primary,main} / {secondary,replica,subordinate}’ ‘{initiator,requester} / {target,responder}’ ‘{controller,host} / {device,worker,proxy}’ ‘leader / follower’ ‘director / performer’ - Recommended replacements for ‘blacklist/whitelist’ are: - ‘denylist / allowlist’ ‘block. 5) Typedefs¶: - totally opaque objects (where the typedef is actively used to hide what the object is). Example: pte_tetc. opaque objects that you can only access using the proper accessor functions. Note Opaqueness and accessor functionsare not good in themselves. The reason we have them for things like pte_t etc. is that there really is absolutely zero portably accessible information there. - Clear integer types, where the abstraction helps avoid confusion whether it is intor long. u8/u16/u32 are perfectly fine typedefs, although they fit into category (d) better than here. Note Again - there needs to be a reason for this. If something is unsigned long, then there’s no reason to do typedef unsigned long myflags_t; but if there is a clear reason for why it under certain circumstances might be an unsigned intand under other configurations might be unsigned long, then by all means go ahead and use a typedef. - when you use sparse to literally create a new type for type-checking. -types and their signed equivalents which are identical to standard types are permitted – although they are not mandatory in new code of your own. When editing existing code which already uses one or the other set of types, you should conform to the existing choices in that code. - Types safe for use in userspace. In certain structures which are visible to userspace, we cannot require C99 types and cannot use the u32form above. Thus, we use __u32 and similar types in all structures which are shared with userspace.. 6) Functions¶.. Do not use the extern keyword with function prototypes as this makes lines longer and isn’t strictly necessary. 7) Centralized exiting of functions¶ Albeit deprecated by some people, the equivalent of the goto statement is used frequently by compilers in form of the unconditional jump instruction. The goto statement comes in handy when a function exits from multiple locations and some common work such as cleanup has to be done. If there is no cleanup needed then just return directly. Choose label names which say what the goto does or why the goto exists. An - saves the compiler work to optimize redundant code away ;) int fun(int a) { int result = 0; char *buffer; buffer = kmalloc(SIZE, GFP_KERNEL); if (!buffer) return -ENOMEM; if (condition1) { while (loop1) { ... } result = 1; goto out_free_buffer; } ... out_free_buffer: kfree(buffer); return result; } A common type of bug to be aware of is one err bugs which look like this: err: kfree(foo->bar); kfree(foo); return ret; The bug in this code is that on some exit paths foo is NULL. Normally the fix for this is to split it up into two error labels err_free_bar: and err_free_foo:: err_free_bar: kfree(foo->bar); err_free_foo: kfree(foo); return ret; Ideally you should simulate errors to test all exit paths. 8) Commenting at Documentation/doc-guide/ and scripts/kernel-doc for details. use. 9) You’ve made a mess of it¶))) (dir-locals-set-class-variables 'linux-kernel '((c-mode . ( (c-basic-offset . 8) (c-label-minimum-indentation . 0) (c-offsets-alist . ( (arglist-close . c-lineup-arglist-tabs-only) (arglist-cont-nonempty . (c-lineup-gcc-asm-reg c-lineup-arglist-tabs-only)) (arglist-intro . +) (brace-list-intro . +) (c . c-lineup-C-comments) (case-label . 0) (comment-intro . c-lineup-comment) (cpp-define-intro . +) (cpp-macro . -1000) (cpp-macro-cont . +) (defun-block-intro . +) (else-clause . 0) (func-decl-cont . +) (inclass . +) (inher-cont . c-lineup-multi-inher) (knr-argdecl-intro . 0) (label . -1000) (statement . 0) (statement-block-intro . +) (statement-case-intro . +) (statement-cont . +) (substatement . +) )) (indent-tabs-mode . t) (show-trailing-whitespace . t) )))) (dir-locals-set-directory-class (expand-file-name "~/src/linux-trees") 'linux-kernel) This will make emacs go better with the kernel coding style for C files below ~/src/linux-trees. Documentation/process/clang-format.rst for more details. 10) Kconfig configuration files¶.rst. 11) Data structures¶. 12) Macros, Enums and RTL¶: - macros that affect control flow: #define FOO(x) \ do { \ if (blah(x) < 0) \ return -EBUGGERED; \ } while (0) is a very bad idea. It looks like a function call but exits the calling function; don’t break the internal parsers of those who will read the code. -) 5) namespace collisions when defining local variables in macros resembling functions: #define FOO(x) \ ({ \ typeof(x) ret; \ ret = calc_ret(x); \ (ret); \ }) ret is a common name for a local variable - __foo_ret is less likely to collide with an existing variable. The cpp manual deals with macros exhaustively. The gcc internals manual also covers RTL which is used frequently with assembly language in the kernel. 13) Printing kernel messages¶ Kernel developers like to be seen as literate. Do mind the spelling of kernel messages to make a good impression. Do not use incorrect contractions_notice(), pr_info(), pr_warn(), pr_err(), etc. Coming up with good debugging messages can be quite a challenge; and once you have them, they can be a huge help for remote troubleshooting. However debug message printing is handled differently than printing other non-debug messages. While the other pr_XXX() functions print unconditionally, pr_debug() does not; it is compiled out by default, unless either DEBUG is defined or CONFIG_DYNAMIC_DEBUG is set. That is true for dev_dbg() also, and a related convention uses VERBOSE_DEBUG to add dev_vdbg() messages to the ones already enabled by DEBUG. Many subsystems have Kconfig debug options to turn on -DDEBUG in the corresponding Makefile; in other cases specific files #define DEBUG. And when a debug message should be unconditionally printed, such as if it is already inside a debug-related #ifdef section, printk(KERN_DEBUG …) can be used. 14) Allocating memory¶ The kernel provides the following general purpose memory allocators: kmalloc(), kzalloc(), kmalloc_array(), kcalloc(), vmalloc(), and vzalloc(). Please refer to the API documentation for further information about them. Documentation/core-api/memory-allocation.r¶ There appears to be a common misperception that gcc has a magic “make me faster” speedup option called inline. While the use of inlines can be appropriate (for example as a means of replacing macros, see Chapter 12), it very often is not. Abundant use of the inline keyword leads to a much bigger kernel, which in turn slows the system as a whole down, due to a bigger. 16) Function return values and names¶. 17) Using bool¶ The Linux kernel bool type is an alias for the C99 _Bool type. bool values can only evaluate to 0 or 1, and implicit or explicit conversion to bool automatically converts the value to true or false. When using bool types the !! construction is not needed, which eliminates a class of bugs. When working with bool values the true and false definitions should be used instead of 1 and 0. bool function return types and stack variables are always fine to use whenever appropriate. Use of bool is encouraged to improve readability and is often a better option than ‘int’ for storing boolean values. Do not use bool if cache line layout or size of the value matters, as its size and alignment varies based on the compiled architecture. Structures that are optimized for alignment and size should not use bool. If a structure has many true/false values, consider consolidating them into a bitfield with 1 bit members, or using an appropriate fixed width type, such as u8. Similarly for function arguments, many true/false values can be consolidated into a single bitwise ‘flags’ argument and ‘flags’ can often be a more readable alternative if the call-sites have naked true/false constants. Otherwise limited use of bool in structures and arguments can improve readability. 18) Don’t re-invent the kernel macros¶ sizeof_field(t, f) (sizeof(((t*)0)->f)) There are also min() and max() macros that do strict type checking if you need them. Feel free to peruse that header file to see what else is already defined that you shouldn’t reproduce in your code. 19) Editor modelines and other cruft¶. 20) Inline assembly¶ */); 21) Conditional Compilation¶.) Within code, where possible, use the IS_ENABLED macro to convert a Kconfig symbol into a C boolean expression, and use it in a normal C conditional: if (IS_ENABLED(CONFIG_SOMETHING)) { ... } The compiler will constant-fold the conditional away, and include or exclude the block of code just as with an #ifdef, so this will not add any runtime overhead. However, this approach still allows the C compiler to see the code inside the block, and check it for correctness (syntax, types, symbol references, etc). Thus, you still have to use an #ifdef if the code inside the block references symbols that will not exist if the condition is not met. At the end of any non-trivial #if or #ifdef block (more than a few lines), place a comment after the #endif on the same line, noting the conditional expression used. For instance: #ifdef CONFIG_SOMETHING ... #endif /* CONFIG_SOMETHING */ Appendix I) References¶ The C Programming Language, Second Edition by Brian W. Kernighan and Dennis M. Ritchie. Prentice Hall, Inc., 1988. ISBN 0-13-110362-8 (paperback), 0-13-110370-9 (hardback). The Practice of Programming by Brian W. Kernighan and Rob Pike. Addison-Wesley, Inc., 1999. ISBN 0-201-61586-X. GNU manuals - where in compliance with K&R and this text - for cpp, gcc, gcc internals and indent, all available from WG14 is the international standardization working group for the programming language C, URL: Kernel process/coding-style.rst, by greg@kroah.com at OLS 2002:
https://www.kernel.org/doc/html/v5.8/process/coding-style.html
CC-MAIN-2021-39
refinedweb
1,848
54.22
Curtis d'Entremont, 10/30/2006 This document describes the new set of API that was added to org.eclipse.help in 3.3 to allow Eclipse to read help content in any format. It allows you to plug-in Java classes that will be called on at appropriate times to provide the following types of content: The extender of help would typically supply an adapter plug-in that contributes one provider for each type of content (listed above), and exposes new extension points to accept any type of file. For example, if you want to write your help content in format XYZ, you can provide an adapter plug-in, say org.xyz.adapter, that exposes new extension points for XYZ content, reads the XYZ files then converts them into content that Eclipse help understands, and provides the content to Help using the new providers API. Providers are contributed via extension points. However there aren't any new extension points; a new element was added alongside the existing extension points where an element was used to supply the XML file. For example, to plug-in a TOC (table of contents) provider, use the new tocProvider element of the org.eclipse.help.toc extension point. The two exceptions are org.eclipse.help.contentProducer extension point for providing html/xhtml documents, and org.eclipse.help.base.luceneSearchParticipants, both of which already existed prior to 3.3 and have not changed. Here is an example of a TOC provider extension: <extension point="org.eclipse.help.toc"> <tocProvider class="org.myproject.MyTocProvider"> </tocProvider> </extension> And the implementation: /* * A simple provider that contributes a single book to the * table of contents named "Hello, World". */ public class MyTocProvider extends AbstractTocProvider { public ITocContribution[] getTocContributions(String locale) { ITocContribution contribution = new ITocContribution() { public String getId() { // a way to identify our book return "org.myplugin.contribution.helloworld"; } public String getCategoryId() { // our book does not belong to any category of books return null; } public boolean isPrimary() { // this is a primary, top-level contribution (a book) return true; } public IToc getToc() { return new IToc() { public String getLabel() { return "Hello, world"; } public String getHref() { return "/mydocs/helloworld.html"; } }; } public String getLocale() { // this provider only provides content for the en_US locale return "en_US"; } public String[] getExtraDocuments() { // there are no extra documents associated with this book return new String[0]; } public String getLinkTo() { // this book does not link into any other book return null; } }; return new ITocContribution[] { contribution }; } } Upon opening Help, you will see a single new static book in the table of contents with the label "Hello, World". As you may have noticed, an ITocContribution is equivalent to (or as powerful as) a toc file extension. That is, anything you can do with a toc XML file contributed via extension can be done with an ITocContribution. This is generally the pattern with the providers API. In fact, the XML files are not treated in any special way by the help system; they are parsed and contributed by a provider in org.eclipse.help.
http://www.eclipse.org/eclipse/platform-ua/documents/providers_api.html
crawl-001
refinedweb
499
52.19
ListView ListView is basically used when you have to show items in a vertically scrolling list. One of the best examples of it is our device’s Contact List which can be With ListView and the user can easily browse the required information, while scrolling up and down also we can divide between every item and set its height and color as per your UI design. Inside a ListView we will show a list of Text items by using TextView and pictures using ImageView also any other view or a combination of views. As ListView is basically used to display a large set of data, so it can not feasible to manually create list items for the complete data, so Android provides us with special Adapter classes that can be used to supply data from datasets to ListView. Given below are some of the main attributes which are most commonly used: As given below we have shown how we can add a ListView to your android application using the layout XML. Using Adapter with ListView Let’s see how you can use an Adapter to read data from an array so it will display it in the form of a List. We will define a ListView in the main layout XML file activity_main.xml in layout screen. So by using this, we have to define a ListView to be created in our MainActivity class. So we need a data and a View into which the dataset will be converted by the Adapter format. Here we have a simple Array with festivals names in it: String[] festivals = { "Diwali", "Holi", "Christmas", "Eid", "Baisakhi", "Halloween" }; As your data set has simple text values, so we can define a simple TextView to hold these values or populate the ListView. Does it sound confusing? Let it sink in. If your dataset would have had an image and some text along with it and then we can also define a TextView along with an ImageView to display the data in the List. So now we will create a new XML file with name list_item.xml in the layout folder, a TextView in it like this. Now its time for the finale, below we have the MainActivity.java class, in which we have used an ArrayAdapter to create text views from the data in the array, and create a list by supplying those view objects to the ListView. import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.view.View; import android.widget.AdapterView; import android.widget.ArrayAdapter; import android.widget.ListView; import android.widget.TextView; import android.widget.Toast; public class MainActivity extends AppCompatActivity { ListView listView; TextView textView; String[] festivals = { "Diwali", "Holi", "Christmas", "Eid", "Baisakhi", "Halloween" }; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); listView = (ListView)findViewById(R.id.listView); textView = (TextView)findViewById(R.id.textView); final ArrayAdapter adapter = new ArrayAdapter(this, R.layout.list_item, android.R.id.textView, festivals); listView.setAdapter(adapter); listView.setOnItemClickListener(new AdapterView.OnItemClickListener() { @Override public void onItemClick(AdapterView> adapterView, View view, int position, long l) { // TODO Auto-generated method stub /* appending Happy with festival name */ String value = "Happy " + adapter.getItem(position); /* Display the Toast */ Toast.makeText(getApplicationContext(), value, Toast.LENGTH_SHORT).show(); } }); } }
https://salesforcedrillers.com/learn-andriod/listview/
CC-MAIN-2022-40
refinedweb
537
56.66
Issues ZF-6966: Zend_Db forces 'naming convention' Description Here's my setup: db.params.... db.params.adapter = vendor_Db_Adapter_XXX Whenever I use this, Zend_Db::factory() does not honor my naming convention, but instead forces an upper case letter, thus making it {{Vendor_Db_Adapter_XXX}}, instead of {{vendor_Db_Adapter_XXX}} which leads to an error ({{Zend_Loader}} can't find the class, etc.). The quickfix is of course an include statement, which doesn't hurt anyway since I am using my custom adapter on each request, but I wanted to file this bug anyway to get it fixed. Wouldn't hurt to be more open in this case. I also know sure that this cannot be introduced/changed without breaking BC. Posted by Dolf Schimmel (Freeaqingme) (freak) on 2009-06-09T07:50:34.000+0000 Isn't this an issue with Zend_Loader rather than Zend_Db? Anyways, Zend_Loader is based on the ZF naming conventions. If you want to deviate from that I suppose you'll need to use your own autoloader (or try the fallback autoloader). Posted by Josh Butts (jimbojsb) on 2009-08-17T11:50:49.000+0000 @Dolf agree this is a Zend_Loader issue, and even then, not really an issue. Posted by Ralph Schindler (ralph) on 2009-08-17T12:36:57.000+0000 The problem is with the code in the factory that does this: Unfortunately, we cannot fix this until 2.0 time. I would postpone, but the likelyhood of this exact code making it into 2.0 is highly unlikely since 2.0 will more than likely take advantage of real namespaces. As Dolf mentioned above, this can be handled with a custom Zend_Loader_Autoloader. Posted by Ralph Schindler (ralph) on 2009-09-20T14:57:04.000+0000 Fixed with the solution in ZF-5606
http://framework.zend.com/issues/browse/ZF-6966?focusedCommentId=31708&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2015-35
refinedweb
292
74.9
appropriate module/hints on how to solve the following problem? Discussion in 'Perl Misc' started by Shark, May 5, 2007. - Similar Threads linux/JNI problem: looking for hintsmoongateclimber, Apr 15, 2004, in forum: Java - Replies: - 3 - Views: - 880 - moongateclimber - Apr 19, 2004 Re: module docstring, documentation,anything? please note is the module type/object NOT some moduleMaric Michaud, Jun 24, 2006, in forum: Python - Replies: - 0 - Views: - 7,343 - Maric Michaud - Jun 24, 2006 hints for find-as-you-type hints, Apr 8, 2009, in forum: Java - Replies: - 2 - Views: - 619 Problem generating appropriate WSDL from C# classes generated from an XSDAndy, May 4, 2004, in forum: ASP .Net Web Services - Replies: - 0 - Views: - 350 - Andy - May 4, 2004 How to use Perl Graph module to solve travel salesperson problem (TSP)?Meng, Mar 17, 2005, in forum: Perl Misc - Replies: - 11 - Views: - 883 - John Bokma - Mar 21, 2005 new module Gimp::ScriptFu::Client appropriate namespace ?Alan Stewart, Feb 9, 2007, in forum: Perl Misc - Replies: - 0 - Views: - 168 - Alan Stewart - Feb 9, 2007 tigra javascripts hints problemMZ, Dec 30, 2007, in forum: Javascript - Replies: - 1 - Views: - 305 - K. - Dec 31, 2007 Is a Metaclass the appropriate way to solve this problem?Matthew Lefavor, Aug 7, 2013, in forum: Python - Replies: - 0 - Views: - 173 - Matthew Lefavor - Aug 7, 2013
http://www.thecodingforums.com/threads/appropriate-module-hints-on-how-to-solve-the-following-problem.902959/
CC-MAIN-2016-36
refinedweb
218
65.15
CLucene - a full-featured, c++ search engine API Documentation #include <TermVector.h> The vector consists of the name of the field, an array of the terms tha occur in the field of the org.apache.lucene.document.Document and a parallel array of frequencies. Thus, getTermFrequencies()[5] corresponds with the frequency of getTerms()[5], assuming there are at least 5 terms in the Document. The org.apache.lucene.document.Fieldable name. Array of term frequencies. Locations of the array correspond one to one to the terms in the array obtained from getTerms method. Each location in the array contains the number of times this term occurs in the document or the document field. The size of the returned array is size() Just like indexOf(int32_t) but searches for a number of terms at the same time. Returns an array that has the same size as the number of terms searched for, each slot containing the result of searching for that term number. Solve the diamond inheritence problem by providing a reinterpret function. No dynamic casting is required and no RTTI data is needed to do this clucene.sourceforge.net
http://clucene.sourceforge.net/doc/html/classlucene_1_1index_1_1TermFreqVector.html
CC-MAIN-2017-22
refinedweb
189
55.44
chdir - change working directory #include <unistd.h> int chdir(const char *path); The chdir() function shall cause the directory named by the pathname pointed to by the path argument to become the current working directory; that is, the starting point for path searches for pathnames not beginning with '/'. Upon successful completion, 0 shall be returned. Otherwise, -1 shall be returned, the current working directory shall remain unchanged, and errno shall be set to indicate the error. The chdir() function shall fail if: - [EACCES] - Search permission is denied for any component of the pathname. - name is not a directory. The ch}. Changing the Current Working Directory The following example makes the value pointed to by directory, /tmp, the current working directory.#include <unistd.h> ... char *directory = "/tmp"; int ret; ret = chdir (directory); None. The chdir() function only affects the working directory of the current process. None. getcwd(), the Base Definitions volume of IEEE Std 1003.1-2001, <unistd.h> First released in Issue 1. Derived from Issue 1 of the SVID. The APPLICATION USAGE section is added.: - The [ELOOP] optional error condition is added.
http://pubs.opengroup.org/onlinepubs/009695399/functions/chdir.html
CC-MAIN-2015-22
refinedweb
183
58.69
In this tutorial, we will cover the difference between r-squared and adjusted r-squared. It includes detailed theoretical and practical explanation of these two statistical metrics in R. R-squared (R²) It measures the proportion of the variation in your dependent variable explained by all of your independent variables in the model. It assumes that every independent variable in the model helps to explain variation in the dependent variable. In reality, some independent variables (predictors) don't help to explain dependent (target) variable. In other words, some variables do not contribute in predicting target variable. Mathematically, R-squared is calculated by dividing sum of squares of residuals (SSres) by total sum of squares (SStot) and then subtract it from 1. In this case, SStot measures total variation. SSreg measures explained variation and SSres measures unexplained variation. As SSres + SSreg = SStot, R² = Explained variation / Total Variation R-Squared is also called coefficient of determination. It lies between 0% and 100%. A r-squared value of 100% means the model explains all the variation of the target variable. And a value of 0% measures zero predictive power of the model. Higher R-squared value, better the model. Adjusted R-Squared It measures the proportion of variation explained by only those independent variables that really help in explaining the dependent variable. It penalizes you for adding independent variable that do not help in predicting the dependent variable. Adjusted R-Squared can be calculated mathematically in terms of sum of squares. The only difference between R-square and Adjusted R-square equation is degree of freedom. In the above equation, dft is the degrees of freedom n– 1 of the estimate of the population variance of the dependent variable, and dfe is the degrees of freedom n – p – 1 of the estimate of the underlying population error variance. Adjusted R-squared value can be calculated based on value of r-squared, number of independent variables (predictors), total sample size. Difference between R-square and Adjusted R-square - Every time you add a independent variable to a model, the R-squared increases, even if the independent variable is insignificant. It never declines. Whereas Adjusted R-squared increases only when independent variable is significant and affects dependent variable. - Adjusted r-squared can be negative when r-squared is close to zero. - Adjusted r-squared value always be less than or equal to r-squared value. In the table below, adjusted r-squared is maximum when we included two variables. It declines when third variable is added. Whereas r-squared increases when we included third variable. It means third variable is insignificant to the model. Which is better? Adjusted R-square should be used to compare models with different numbers of independent variables. Adjusted R-square should be used while selecting important predictors (independent variables) for the regression model. R : Calculate R-Squared and Adjusted R-Squared Suppose you have actual and predicted dependent variable values. In the script below, we have created a sample of these values. In this example, y refers to the observed dependent variable and yhat refers to the predicted dependent variable. y = c(21, 21, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2)Final Result : R-Squared = 0.6410828 yhat = c(21.5, 21.14, 26.1, 20.2, 17.5, 19.7, 14.9, 22.5, 25.1, 18) R.squared = 1 - sum((y-yhat)^2)/sum((y-mean(y))^2) print(R.squared) Let's assume you have three independent variables in this case. n = 10In this case, adjusted r-squared value is 0.4616242 assuming we have 3 predictors and 10 observations. p = 3 adj.r.squared = 1 - (1 - R.squared) * ((n - 1)/(n-p-1)) print(adj.r.squared) Python : Calculate Adjusted R-Squared and R-Squared import numpy as np y = np.array([21, 21, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2]) yhat = np.array([21.5, 21.14, 26.1, 20.2, 17.5, 19.7, 14.9, 22.5, 25.1, 18]) R2 = 1 - np.sum((yhat - y)**2) / np.sum((y - np.mean(y))**2) R2 n=y.shape[0] p=3 adj_rsquared = 1 - (1 - R2) * ((n - 1)/(n-p-1)) adj_rsquared SAS : Adjusted R-Squared Calculation data temp;input y yhat;cards;21 21.521 21.1422.8 26.121.4 20.218.7 17.518.1 19.714.3 14.924.4 22.522.8 25.119.2 18;run;data out2;set temp ;d=y-yhat;absd=abs(d);d2 = d**2;run;/* Residual Sum of Square */proc means data = out2 ;var d2;output out=rss sum=;run;data _null_;set rss;call symputx ('rss', d2);run;%put &RSS.;/* Total Sum of Square */proc means data = temp ;var y;output out=avg_y mean=avg_y;run;data _null_;set avg_y;call symputx ('avgy', avg_y);run;%put &avgy.;data out22;set temp ;diff = y - &avgy.;diff2= diff**2;run;proc means data = out22 ;var diff2;output out=TSS sum=;run;data _null_;set TSS;call symputx ('TSS', diff2);run;/* Calculate the R2 */%LET rsq = %SYSEVALF(1-&RSS./&TSS);%put &RSQ;/* Calculate the Adj R2 */%LET N = 10;%LET P = 3;%let AdjRsqrd= %SYSEVALF(1 -((1-&rsq)*(&N-1)/(&N-&P-1)));%PUT &AdjRsqrd; Could you please explain RMSE, AIC and BIC as well. We use RMSE to compare our model. Lesser the value is good for our model, but I m not sure about the rest of the statistics AIC and BIC respectively.. Nicely Explained nice explaination so it means always adjusted r square will <= Rsquare Could you give please the data set in order to understand the difference better. R squared depends on the sum of squared errors (SSE), il SSE decreases (new predictor has improved the fit) then R squared increases. In this case R squared is a good measure. Please give us a complete example to understand. Thank you its really good effort to explain it.. Also explain BIC and AIC
https://www.listendata.com/2014/08/adjusted-r-squared.html?showComment=1476163855938
CC-MAIN-2020-05
refinedweb
1,009
58.79
I’m trying to get a flattened array of children (down to a certain depth) of a folder (which is using acts_as_tree). def recursive_find_depth(depth) if self.children.length > 0 and depth != 0 self.children.each do |child| puts child.name child.recursive_find_depth(depth -= 1) end end end At the moment, the function ‘puts’ the right output, i.e. all the children under a certain folder to a certain depth. However, I completely stuck about how to get the finished array of children out of this function (this shouldn’t be hierarchal, just a flattened array). If anyone’s wondering what I’m using this for - it’s a webdav system (PROPFIND). Thanks in advance
https://www.ruby-forum.com/t/recursively-finding-children/76176
CC-MAIN-2021-25
refinedweb
115
68.57
py First, we create an AddressParser. AddressParser allows us to feed in lists of cities, streets, and address suffixes. Then we call parse_address on our address string, which returns an Address instance with all the attributes filled out. From there, we can print parts of the address, change them, validate them, create a database model to store them, or anything else. from address import AddressParser, Address ap = AddressParser() address = ap.parse_address('123 West Mifflin Street, Madison, WI, 53703') print "Address is: {0} {1} {2} {3}".format(address.house_number, address.street_prefix, address.street, address.street_suffix) > Address is: 123 W. Mifflin St. AddressParser AddressParser(self, suffixes=None, cities=None, streets=None) suffixes, cities, and streets all accept lists as arguments. If you leave them as none, they will read default files from the package, namely suffixes.csv, cities.csv, and streets.csv. Streets is intentionally blank. You can provide lists of acceptable suffixes, cities, and streets to lower your false positives. If you know all the addresses you are processing are in a small area, you can provide a list of the cities in the area and should get more accurate results. If you are only doing one city, you could provide that single city in a list, and a list of all streets in that city. Address Addresses get returned by AddressParser.parser_address(). They have the following attributes: house_number The number on a house. This is required for all valid addresses. E.g. 123 W. Mifflin St. street_prefix The direction before the street name. Always represented as one or two letters followed by a period. Not required. E.g. 123 W. Mifflin St. street The name of the street. Potentially multiple words. This is required for a valid address. E.g. 123 W. Mifflin St. street_suffix The ending of a street. This will always be the USPS abbreviation followed by a period. Not required, but highly recommended. E.g. 123 W. Mifflin St. apartment Apartment number or unit style or any number of things signifying a specific part of an address. Not required. E.g. 123 W. Mifflin St. Apt 10 buiding Sometimes addresses are grouped into buildings, or are more commonly known as by building names. Not required, and often in parathenses. E.g. 123 W. Mifflin St. Apt 10 (The Estates) city The city part of the address, preferably following a comma. E.g. 123 W. Mifflin St., Madison, WI 53703 state The state of the address, preferably following the city and a comma. Always two capitalized letters. E.g. 123 W. Mifflin St., Madison, WI 53703 zip The 5 or 9 digit zip code of the address, preferably following the state. 9 digit zips supported in the format (xxxxx-xxx). E.g. 123 W. Mifflin St., Madison, WI 53703 full_address() Returns a human readable version of the address for display. Follows the same style rules as the above attributes. Example return: (The Estates) 123 W. Mifflin St. Apt 10, Madison, WI 53703 Todo - Add verification of an address through Google Maps API, given an API key. - Allow custom validation conditions in AddressParser for what counts as a correct address or not. - Add exceptions for incorrect addresses instead of silent failing and letting user validate. 1.2 - Added handling of 9 digit zip codes - Updated city database to National Weather Service file from 8 August 2012 - Forked original address repository and continuing work at GitHub File support requests and obtain the source from License and Copyright Copyright (c) 2013 Swoop Search LLC. Copyright (c) 2013 Josh Gachnang. This library is released under the New BSD License. Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pyaddress/
CC-MAIN-2017-51
refinedweb
618
66.33