Thursday, May 2, 2013 7:33:35 PM
After updating the NVIDIA drivers to nvidia-current-updates you may find that OpenCL has stopped working. But don't panic: it's not that these drivers don't support OpenCL, it's just that OpenCL is no longer installed correctly.
OpenCL locates vendor drivers through an ICD (Installable Client Driver), and that's the part that commonly breaks.
If you search for .icd files you'll find nvidia.icd hiding somewhere in /usr/share. In the case of the experimental drivers it's /usr/share/nvidia-experimental-310/nvidia.icd. What you have to do is make a symlink to it in /etc/OpenCL/vendors:
sudo ln -s /usr/share/nvidia-experimental-310/nvidia.icd /etc/OpenCL/vendors/nvidia.icd
If you open the file with a text editor you will find that it simply holds a library name (in my case libnvidia-opencl.so.1). You need to make that library available to the system by creating a symlink to it in one of the LD_LIBRARY_PATH directories; /usr/lib will always work:
sudo ln -s /usr/lib/nvidia-experimental-310/libnvidia-opencl.so.1 /usr/lib/libnvidia-opencl.so.1
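To double-check both links:

ls -l /etc/OpenCL/vendors/nvidia.icd /usr/lib/libnvidia-opencl.so.1

After that, any program that asks OpenCL to enumerate its platforms should see the NVIDIA platform again; the clinfo utility, if you have it installed, is the quickest test.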
This also works fine with NVIDIA cards running Optimus. You just need to run OpenCL apps with Bumblebee (optirun), just as you would run a normal OpenGL app.
Tuesday, April 30, 2013 9:59:56 PM
Kaveri is a long-awaited major leap forward in PC hardware. While we have had the CPU and GPU combined into one chip for quite some time, that alone brings only a limited advantage over having them separate. What Kaveri brings is a shared memory architecture: the GPU can use the same memory the CPU does and vice versa, including virtual memory.
Why am I calling this a "major leap forward"?
In all kinds of applications involving the GPU, whether general-purpose computation (using something like CUDA or OpenCL) or games, a lot of time is actually spent copying memory buffers back and forth between CPU memory (RAM) and GPU memory, and all that copying is quite expensive. More direct access to GPU memory is one of the reasons why game consoles run games so much better than the same generation of PC hardware.
Games do a lot of copying. Textures have to be loaded into GPU memory: first they are loaded into CPU memory and only then transferred to GPU memory. In games that use mega texturing (like Rage) this happens almost every frame. Games also pass vertex data and shader parameters to the GPU, and may even do CPU-side processing on buffers the GPU produced. If a game uses GPU-accelerated physics, there's even more data being exchanged with the CPU.
With Kaveri, all of the data produced by the CPU can be made available to the GPU immediately.

So is it all perfect? Not quite. While this all sounds awesome, there are some catches. Direct use of GPU memory will be there as a hardware feature, but it might take some time for software to catch up. OpenCL is built with such a capability in mind and some programs might even get faster out of the box, but APIs like DirectX and OpenGL are a different story: they will have to be updated before this kind of zero-copy memory sharing can be used. OpenGL can add it through extensions; with DirectX it's a bit more complicated.
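To make the zero-copy idea concrete, here is a minimal sketch of how OpenCL already expresses it: the buffer is allocated where the runtime prefers (CL_MEM_ALLOC_HOST_PTR) and mapped into the host address space, instead of being filled through explicit clEnqueueWriteBuffer copies. On shared-memory hardware like Kaveri the map can become essentially free; on a discrete GPU it may still copy behind the scenes. Error checking is omitted for brevity.

#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Let the runtime place the buffer where both CPU and GPU can reach it. */
    const size_t bytes = 1024 * sizeof(float);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                bytes, NULL, NULL);

    /* Map the buffer: the CPU writes straight into it, no explicit copy call. */
    float *p = (float *)clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                           0, bytes, 0, NULL, NULL, NULL);
    for (size_t i = 0; i < 1024; i++)
        p[i] = (float)i;
    clEnqueueUnmapMemObject(queue, buf, p, 0, NULL, NULL);

    /* ...enqueue a kernel that consumes buf here... */

    clReleaseMemObject(buf);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}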
Monday, February 28, 2011 7:15:18 PM
Today I set up a Maven repository for Dirmi, a nice bidirectional remoting solution intended to replace RMI.
Sharing it with the world.
To include it in your project:
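Something along these lines in your pom.xml should do it. Note that the repository URL and the artifact coordinates below are placeholders; substitute the actual ones:

<!-- placeholder URL: point this at the repository above -->
<repositories>
  <repository>
    <id>dirmi</id>
    <url>http://example.com/maven2</url>
  </repository>
</repositories>

<!-- hypothetical coordinates: check the repository for the real ones -->
<dependency>
  <groupId>org.cojen</groupId>
  <artifactId>dirmi</artifactId>
  <version>1.0</version>
</dependency>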
I hope you'll find this useful.
Friday, January 28, 2011 8:37:47 PM
Today I read a post by the Internet's highly respected author Neil McAllister, in which he responded to the WHATWG's plan to drop HTML version numbers. You can read the whole article here.
He claims that dropping HTML version numbers will cause trouble for web application authors. Quote: "Once they do, their customers will end up with a browser that supports some form of HTML as it was specified at some point in time. Without so much as a version number to go by, it will be virtually impossible for the customer to understand -- or even express -- just what form of HTML that actually is."
Why would anyone care? It's not the standard's version that matters, but the individual features! And it's common practice to check for the presence of a feature rather than for a supported HTML version. A new HTML spec comes out once every few years, but individual features can show up at any time, so why not start supporting a feature before the whole spec is done? Lots of HTML5 features are already in use, and users benefit a lot from them.
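For example, a canvas check in JavaScript takes a couple of lines and involves no version number at all:

// Probe the DOM for <canvas> support instead of asking for an HTML version.
function supportsCanvas() {
    var el = document.createElement('canvas');
    return !!(el.getContext && el.getContext('2d'));
}

if (supportsCanvas()) {
    // draw with canvas
} else {
    // fall back to static images
}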
My suggestion is to standardize the features separately and rely on those, rather than on the whole pack of them at once.
Wednesday, December 1, 2010 9:30:35 PM
My Google Profile page shows:
Your profile is not yet eligible to be featured in Google search results
To have your profile featured, add more information about yourself.
Add more info to my profile | Learn more
But it was from Google Search that I learnt I have this profile in the first place.
Sunday, October 24, 2010 5:17:41 PM
Today I decided to run the SunSpider test on 4 popular browsers for Linux, on my Asus F3Ka laptop. Here are the results (lower is better):
- Opera 10.63 - 1300.2ms
- Firefox 3.6.11 - 3002.0ms
- Chromium 6.0.472.63 - 1096.2ms
- Rekonq 0.6.1 - 1678.4ms
It seems that Firefox is really behind all the other browsers. Unexpected, I would say.
Thursday, October 21, 2010 7:53:24 PM
I'm running a server in my room and one day I got curious just how much power the thing actually consumes. I got a power meter from a friend and started testing.
- CPU: AMD Phenom 9600 Black Edition (4 x 2.3GHz)
- GPU: NVIDIA GeForce 7025 (the monitor was not plugged in)
- RAM: 2 x 2GB DDR2 800MHz (running at 533MHz I believe)
- Motherboard: ASRock N68C-S UCC (nForce 630a chipset)
- Additional cards: VIA Rhine III network adapter
- HDD: Samsung SATA drive
- Power supply: 420W
The server runs Debian 5.0.
Booting up showed around 90W of power consumption; once it was done loading it dropped to 63W. That's how much it uses when idle with all 4 cores running at 2.3GHz.
Full load test
I ran a 4-threaded application (a continuous loop performing some simple math) with all cores at 2.3GHz. After testing with 4 cores I began turning cores off one by one. Here's the result:
Turning off cores while the CPU is idle didn't help much: consumption went from 63W to 61W.
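The load application itself can be as simple as a pthread busy loop, something like this sketch:

/* Spin N threads on simple floating point math forever. */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

static void *burn(void *arg) {
    volatile double x = 1.0;          /* volatile: keeps the loop from being optimized away */
    for (;;)
        x = x * 1.000001 + 0.000001;  /* simple math, never sleeps */
    return NULL;
}

int main(int argc, char **argv) {
    int n = argc > 1 ? atoi(argv[1]) : 4;  /* thread count, default 4 */
    for (int i = 0; i < n; i++) {
        pthread_t t;
        pthread_create(&t, NULL, burn, NULL);
    }
    pause();                               /* run until Ctrl+C */
    return 0;
}

And cores can be taken offline one at a time through sysfs:

echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online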
Full load at 1.2GHz
- 1 core: 73W
- 2 cores: 82W
- 4 cores: 102W
So one core takes ~10W when running at 1.2GHz under full load: 73W minus the 63W idle baseline is 10W for the first core, and going from 2 cores to 4 costs 102W - 82W = 20W, again 10W each.
1.2GHz vs 2.3GHz
So how much does the higher clock cost? Going from 1.2GHz to 2.3GHz adds about another 10W per core.
The AMD+NVIDIA platform is amazing. Drawing 62W when idle... WOW. My Asus F3Ka laptop with a Turion TL-60 (2 x 2GHz) uses 44W when idle at 2 x 800MHz with Wi-Fi on and full brightness, and 80W at full speed. The difference isn't all that big. What can I say, AMD did a fine job.
Friday, August 20, 2010 7:55:48 PM
I always thought the world needed a fast MVC framework for Java: as flexible and nice as, say, Ruby on Rails or Grails, but with controller actions written in a faster language such as Java. The current Java MVC frameworks (such as Struts or Spring Web MVC) are far from the beauty the former frameworks provide. But...

Today I was enlightened by my friend, who showed me the Play Framework: http://www.playframework.org/

The answer to a Java web developer's prayers. Thank you.