
Wednesday, June 24, 2020

SVG JS/ES Asteroids Video Game for Modern Chrome Browser

Browser-Based Asteroids-clone Video Game

Using 2020 JavaScript features available in Chrome browser

Earlier this year, during the beginning of the major social-distancing requirements (due to COVID), I finally decided to spend a few weeks of my free time writing a video game that would make use of some of the newest features of JS / ECMAScript. I ended up with a clone of the famous 1979 Asteroids video game, but with all sorts of extra features and improvements (as I deem them).

JavaScript Features Used

  • requestAnimationFrame / cancelAnimationFrame  — which keeps the game frame-rate flowing nicely even as many Space Rocks and UFOs are threatening your existence.
  • Asynchronous code — a nice bit of async function and await / Promise usage!
  • JS Classes  — including plenty of encapsulation, inheritance, static variables and such
  • Game-pad support  — in addition to keyboard controls, I used the standard JS Gamepad API (navigator.getGamepads()) to implement Xbox (or similar controller) support
  • Sound — without any extra audio files, but rather generated via the Web Audio API's AudioContext oscillator features
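To illustrate the class-related features listed above, here is a minimal, self-contained sketch (hypothetical names only; this is not the actual game code) showing encapsulation via private fields, a static counter, and inheritance with a polymorphic override:

```javascript
// Base class with a private (encapsulated) position and a static counter.
class GameObject {
  static count = 0;      // static member shared by all instances
  #x; #y;                // private fields: inaccessible outside the class
  constructor(x, y) {
    this.#x = x;
    this.#y = y;
    GameObject.count++;
  }
  get position() { return { x: this.#x, y: this.#y }; }
  move(dx, dy) { this.#x += dx; this.#y += dy; }
  describe() { return "object"; }
}

// Subclass: inherits movement/position, overrides describe().
class SpaceRock extends GameObject {
  constructor(x, y, size) {
    super(x, y);         // reuse base-class construction
    this.size = size;
  }
  describe() { return `rock(size=${this.size})`; }
}

const rock = new SpaceRock(10, 20, 3);
rock.move(5, -5);
console.log(rock.describe(), rock.position, GameObject.count);
// → rock(size=3) { x: 15, y: 15 } 1
```

In the real game, instances like these would be updated every frame from inside a requestAnimationFrame callback.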

CSS / SVG Features

  • Animation — CSS animation of fills, strokes, etc.
  • SVG Symbol and Def — for maximum re-use via SVG use.
In the end, I was able to achieve a LOT of interesting visual-effects without the need to resort to a lot of custom animation code. 
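The Symbol/Def/use pattern mentioned above can be sketched in plain JavaScript by generating the markup as strings (illustrative only; the ids and polygon points here are made up):

```javascript
// Define a shape once as a <symbol> inside <defs>...
function rockSymbol(id, points) {
  return `<symbol id="${id}" viewBox="0 0 100 100"><polygon points="${points}"/></symbol>`;
}

// ...then stamp out as many instances as needed with <use>.
function placeRock(id, x, y) {
  return `<use href="#${id}" x="${x}" y="${y}" width="100" height="100"/>`;
}

const svg =
  `<svg xmlns="http://www.w3.org/2000/svg">` +
  `<defs>${rockSymbol("rock", "50,0 100,40 75,100 20,90 0,30")}</defs>` +
  placeRock("rock", 0, 0) +
  placeRock("rock", 120, 40) +
  `</svg>`;

console.log(svg);
```

Each `<use>` element references the single `<symbol>` definition, so the shape geometry lives in the document exactly once no matter how many rocks are on screen.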

UFO-Infested Space Rocks Video Game

I placed my UFO-Infested Space Rocks Video Game online at GitHub for anyone who wants to play with it.  The gameplay evolves as you go, presenting "smarter" aliens and an ever more frenetic pace.  The game works wonderfully in a modern Chrome browser on the PC.  There are still issues with it in Firefox.  I have not even tried to use it on a phone or the like (I have no idea how the keyboard controls or game-controller logic would make any sense there).

Further Software and Technology Reading

Continue to read this Software Development and Technology Blog for computer programming articles (including useful free / OSS source-code and algorithms), software development insights, and technology Techniques, How-To's, Fixes, Reviews, and News — focused on Dart Language, SQL Server, Delphi, NVIDIA CUDA, VMware, Typescript, JavaScript / ECMAScript, SVG, other technology tips and how-to's, plus my varied political and economic opinions.

Wednesday, September 05, 2012

VMware ESXi 5.1 New Features and vSphere 5.1 New Features

This post is a follow-up to the VMware ESXi 5.0 New Features posting from just over a year ago. VMware has released to the public the details of the new features in VMware ESXi 5.1 and vSphere 5.1, and I will cover those new 5.1 features here; if you are new to the 5.x series, the prior post may still be quite interesting as well. Spoiler: one huge new "feature" in 5.1 is the removal of the vRAM limits! Let's look at all this more...


New Features in VMware ESXi 5.1


vRAM Memory Limits Removed: the biggest non-feature "feature"!

What does it say about a product when the biggest "feature" is simply un-doing / correcting a blunder made by upper-management at a company? If you remember the fiasco surrounding the new vRAM memory limits imposed by ESXi/vSphere 5.0, you know to what I refer. VMware's attempts to squeeze more cash out of customers by imposing what amounted to a RAM-tax upon their robust server boxes backfired (i.e., irked customers, like me). And, they have now un-done that mistake. ESXi/vSphere 5.1 is supposed to now be priced (solely) on a per-CPU-socket basis rather than on a strange and ridiculous combo of sockets/virtual memory used/VMs-being-managed. That is a good thing: I actually stuck with ESXi 4.1 due to the 5.0 vRAM bull@#! So, version 5.1 is on my radar.

Support for Newer Hardware

Not surprisingly, this latest 5.1 release includes support for bigger and more recent computing hardware (both Intel and AMD). In addition, the virtualization hardware-abstraction layer has been upgraded to a new "Version 9 virtual hardware" that includes support for Intel's VT-x with Extended Page Tables (EPT) virtualization-assistance features and AMD-V with Rapid Virtualization Indexing (RVI) (née "nested page tables"). This VT-x/EPT and AMD-V/RVI support helps reduce the overhead that the hypervisor and virtual machine (VM) guest operating systems impose on the physical processors (your server's CPUs).

One nice aspect of this latest 5.1 version is that, unlike with the 5.0 release, any VM created on VMware ESX Server 3.5 or later can continue to run on ESXi 5.1 unchanged (i.e., without being forced to shut down, update to the version 9 virtual hardware, and restart). Of course, if you want the latest VM and hypervisor features that come with "version 9 virtual hardware", you will have to update your VMs to get them, but at least you have the option to postpone the virtual-hardware upgrade until it is convenient.

New Adobe-Flash Web-Based Management Client for vSphere 5.1

Yes, you read that right: a new Flash-based management client! (Actually, it was written in Apache Flex, which uses Flash to run applications built with Flex.) I personally am OK with this, as I have worked with some very capable Flash-based applications. The old management client can still interact with vSphere 5.1, but features that are new to vSphere 5.1 will only be available in the Flash-based web client. Sure, it means you need Flash installed on whatever machine you plan to manage your virtualization setup from, but so be it. I already need Flash for so many other things that this is a given.

The new UI is peppy, stable, and secure from what reviewers are saying so far. And, it offers an advantage of performing some potentially-long-running-tasks asynchronously (threaded) so as to prevent UI lockup that could occur in the previous management UIs. And, the fact is, Flash-based UIs should look and behave identically on any device that can run Flash — which surely cannot be said of HTML-based UIs!

Virtual Machine Hardware-Accelerated 3D Graphics Support

Maybe VMware read my past blog post where, regarding ESXi 5.0, I stated that I felt "something is amiss: where is the Nvidia CUDA / vGPU support in ESXi 5.0?" Well, it turns out VMware is noticing the importance of offloading processing to GPUs after all:
With vSphere 5.1, VMware has partnered with NVIDIA to provide hardware-based vGPU support inside the virtual machine. vGPUs improve the graphics capabilities of a virtual machine by off-loading graphic-intensive workloads to a physical GPU installed on the vSphere host. In vSphere 5.1, the new vGPU support targets View environments that run graphic-intensive workloads such as graphic design and medical imaging.

Hardware-based vGPU support in vSphere 5.1 is limited to View environments running on vSphere hosts with supported NVIDIA GPU cards [well, duh] (refer to the VMware Compatibility Guide for details on supported GPU adapters). In addition, the initial release of vGPU is supported only with desktop virtual machines running Microsoft Windows 7 or 8. Refer to the View documentation for more information on the vGPU capabilities of vSphere 5.1.

NOTE: vGPU support is enabled in vSphere 5.1, but the ability to leverage this feature is dependent on a future release of View. Refer to the View documentation for information on when this feature will be available.
Hmmmm... I am not too keen on that final caveat / disclaimer (about "future release" and timeline), but it sure sounds better than the lack of information about NVidia GPU support in previous releases! I am definitely intrigued by this since I play around a bit with CUDA code, but I am not specifically seeing "CUDA" mentioned here. I wonder how far this "off-loading" goes?

Other New and Enhanced ESXi / vSphere 5.1 Features

In no particular order...
  • Windows 8 desktop and Windows Server 2012 support. Nothing I personally plan to use in production anytime soon, but support is there for the latest Microsoft operating systems. I do have intentions of trying these latest OS offerings out, and VMs are the only way I would even consider it; so, good thing they are supported.
  • ESXi 5.1 has improved CPU virtualization methods ("virtualized hardware virtualization", or VHV) that are supposed to allow near-native access to the physical CPU(s) by your virtualized guest OSes. We all like more speed in our VMs, so this sounds like a plus.
  • ESXi now has the ability to perform a VM-live-migration between two separate physical servers (running ESXi) without the need for both machines to be attached to the SAN. I need to read up on this and fully understand what that means... like, do I need a SAN at all anymore for this?
  • CPU counter and hardware-assisted-virtualization information can now be exposed to guest operating systems (useful to developers that need to debug / tune applications meant to run in a VM).
  • New storage features, including: the number of hosts that can share a read-only file on a VMFS volume has been increased to 32 (from 8); Space-Efficient Sparse Virtual Disks with automated mechanisms for reclaiming stranded space plus a dynamic block-allocation unit size (tunable to storage/app needs); 5-node MSCS clusters (vs. 2-node); jumbo-frame support for all iSCSI adapters (with UI support too); and Boot from Software FCoE.
  • The reliance on a shared "root" user account (for administrators) was eliminated and support was added for SNMPv3. Local users assigned administrative privileges automatically get full shell access and no longer must "su" (sudo) to root to run privileged commands. This makes for finer-grained auditing and monitoring, which is a plus in shared environments.
  • With vSphere 5.1 Guest OS Storage Reclamation feature: when files are removed from inside the guest OS, the size of the VMDK file can be reduced and the deallocated storage space returned to the storage array’s free pool (utilizes new SE sparse VMDK format available with View); but note, this feature carries with it the same disclaimer that the NVIDIA stuff did — i.e., "dependent on a future release of View". Argghh. Wonder how far in the future that may be?


Conclusion

There are a fair number of new features in this latest release of ESXi 5.1 and vSphere 5.1 that are worth checking out, even though some significant ones are "dependent on future releases of View". The timing of this ESX / vSphere release goes along with the latest VMware Workstation, which I discuss here too: VMware Workstation 9.0 New Features of Interest — if you are interested in the desktop-product side of things.


Sunday, December 18, 2011

Nvidia CUDA Toolkit 4.1 and Parallel Nsight 2.1

Nvidia's CUDA technology has been around for 5 years now, and only 6 months ago I blogged about Nvidia's CUDA technology when the CUDA Toolkit 4.0 was released.  Nvidia is keeping up the pace of innovation with a substantive upgrade to both the Nvidia CUDA Toolkit (now at version 4.1) and Parallel Nsight (now at version 2.1).

CUDA: What is it?


CUDA is NVIDIA’s parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU (graphics processing unit) for applications including image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing, and much more.  The current Nvidia (NASDAQ:NVDA) "Fermi" line of GPUs (Graphical Processing Units) provides incredibly powerful parallel computing within reach of most individual users and businesses through rather affordable Nvidia Graphics Cards (and, the upcoming Nvidia "Kepler" GPUs for early 2012 will only be better, faster, and more efficient).

Note: many of these latest CUDA features require a "Fermi"-based GPU (using the new LLVM-based compiler, for instance, does).  These cards are worth investing in if you plan to do any CUDA development.  You can get a Fermi-based CUDA-capable graphics card that is quite affordable and power-efficient: I rather like my Quadro 600 card (~$160.00), which uses only 40W for 96 CUDA-processing cores; this card has handled all my development work very capably.

New in Nvidia CUDA Toolkit 4.1

LLVM Compiler / Toolchain Support

Nvidia CUDA Toolkit 4.1 now includes a new LLVM-based CUDA compiler along with over 1000 new image processing functions, plus a redesigned Visual Profiler.  Integrating the open source Low Level Virtual Machine (LLVM) toolchain support definitely has my attention (LLVM is a collection of modular and reusable compiler and toolchain technologies).

The first notable benefit of the LLVM compiler is that Nvidia claims this compiler delivers up to 10% faster performance for many applications (compared to their prior in-house developed C/C++ compiler).

But, what strikes me as the most (potentially) important aspect of this move to LLVM is that we could potentially soon see more (programming) language support for using CUDA outside of just C/C++ and/or additional CPU support.  Nvidia has apparently used the Clang C and C++ compilers within the LLVM framework and has hooked in support for the CUDA parallel development environment.

Although Nvidia's (CUDA C and CUDA C++) compiler modifications are not open-sourced, LLVM will provide a foundation for more easily adding language/processor support.  Given Apple's use of LLVM on ARM (platform), I have to wonder if this is going to be a build-target in the not too distant future.  There are also open-source projects for other programming languages to make use of the LLVM toolchain, so the potential does exist for accessing CUDA / GPU-support from other domain-specific languages eventually (perhaps Java, Python, etc) directly.

Other Major New Features in CUDA Toolkit 4.1
(from Nvidia website, with some added comments and details)

New & Improved “Drop-In” Acceleration With GPU-Accelerated Libraries

  • Over 1000 new image processing functions in the NPP (Nvidia Performance Primitives) library — this brings the total number of NPP functions to 2200+. These GPU-accelerated functions (building blocks) for image and signal processing include capabilities geared toward arithmetic, logic, conversion, statistics, filters, and more; also, these can execute on the GPU at up to 40x (yes, 40 times!) the speed of Intel IPP (Integrated Performance Primitives).  This is great for media, entertainment, and visual-processing applications.
  • New Boost style placeholders in Thrust CUDA C++ template library which allow inline functors now.  Thrust includes optimized functions for sort, reduce, scan operations and so on.
  • New cuSPARSE tri-diagonal solver up to 10x faster than MKL on a 6 core CPU; this also includes up to 2x faster sparse matrix vector multiplication using ELL hybrid format 
  • New support in cuRAND for MRG32k3a and Mersenne Twister (MTGP11213) RNG algorithms 
  • Bessel functions now supported in the CUDA standard Math library 
  • CuFFT (Fast Fourier Transforms) library has a thread-safe API now (callable from multiple host-threads); also, substantial improvements in speed!
  • CuBLAS level 3 performance improvements up to 6X over Intel MKL (Math Kernel Library)
  • Batched-GEMM API for more efficient processing of many small matrices (i.e., 4x4 through 128x128 matrices; up to 4X speedup over MKL); up to 1 TFLOPS sustained performance (yes, a teraflop!  Wow)
  • Average and rounded-average functions (e.g., hadd / rhadd - signed and unsigned)
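To get a feel for the workload the batched-GEMM API targets, here is a plain-JavaScript CPU sketch (illustrative only, not CUDA code, and the batch contents are made up) that multiplies a whole batch of small 4x4 row-major matrices; cuBLAS's batched-GEMM calls perform the same set of multiplies in parallel on the GPU:

```javascript
// Multiply two 4x4 matrices stored as 16-element row-major arrays.
function matmul4(a, b) {
  const c = new Array(16).fill(0);
  for (let i = 0; i < 4; i++)
    for (let j = 0; j < 4; j++)
      for (let k = 0; k < 4; k++)
        c[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
  return c;
}

const identity = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];

// A "batch" of 8 matrix pairs; the batched API does all of these at once.
const batchA = Array.from({ length: 8 }, () => identity.slice());
const batchB = Array.from({ length: 8 }, (_, i) => identity.map(v => v * (i + 1)));
const batchC = batchA.map((a, i) => matmul4(a, batchB[i]));  // one GEMM per pair

console.log(batchC[2]);  // identity times 3*identity, i.e. 3*identity
```

Launching a separate GPU call per tiny matrix would waste the hardware; batching the whole set into one API call amortizes the launch overhead, which is where the big speedups on many-small-matrix workloads come from.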

Enhanced & Redesigned Developer Tools (On Windows, Mac, & Linux)

  • Redesigned Visual Profiler with automated performance analysis and expert guidance (guided workflow and drill-down expert guidance); during an online presentation, this was described as "almost like having an Nvidia engineer in a box", which sure sounds handy!  You should benefit from the experience of those engineers, and be helped along through attaining best-practice outcomes with these built-in automated analyses/experts.
  • Assert() in device code - helpful for debugging!
  • CUDA_GDB support for multi-context debugging and assert() in device code
  • CUDA-MEMCHECK now detects out of bounds access for memory allocated in device code
  • Parallel Nsight 2.1 CUDA warp watch visualizes variables and expressions across an entire CUDA warp
  • Parallel Nsight 2.1 CUDA profiler now analyzes kernel memory activities, execution stalls and instruction throughput
  • Learn more about debugging and performance analysis tools for GPU developers on our CUDA Tools and Ecosystem Summary Page

Advanced Programming Features


  • Access to 3D surfaces and cube maps from device code
  • Enhanced no-copy pinning of system memory, cudaHostRegister() alignment and size restrictions removed
  • Peer-to-peer communication between processes
  • Support for resetting a GPU without rebooting the system in nvidia-smi

New & Improved SDK Code Samples


  • simpleP2P sample now supports peer-to-peer communication with any Fermi GPU
  • New grabcutNPP sample demonstrates interactive foreground extraction using iterated graph cuts (this is really neat!)
  • New samples showing how to implement the Horn-Schunck Method for optical flow, perform volume filtering, and read cube map texture

New in Nvidia Parallel Nsight 2.1 for Visual Studio
Parallel Nsight is a powerful IDE-integration and development tool that allows you to perform the following types of procedures from within Microsoft Visual Studio:

  • Debug CUDA Kernels directly on the GPU hardware
  • Examine (potentially thousands of) threads that are executing in parallel
  • Use on-target conditional breakpoints to locate errors
  • Use the CUDA memory-checker
  • Perform System-Trace activities to review CUDA activities that span your CPU(s) and GPU(s)
  • Perform deep kernel analysis to find performance bottlenecks so you can realize the speedup that is possible with CUDA and massively parallel code.
  • Profiling capabilities including advanced experiments to measure memory utilization, instruction throughput, and stall conditions
Some of the new capabilities include:
  • a "warp watch" ability to watch variables and expressions across an entire CUDA warp (a particular level of granularity that is very useful to watch)
  • analyzing kernel memory (alloc/dealloc events, execution stalls, etc)


Summary: CUDA 4.1 Continues Nvidia's Great GPU-Accelerated Application Development Tools Improvements

This latest release of the CUDA Toolkit from Nvidia continues to make life easier for any of us who are into parallel programming with modern GPUs.  Although GPU computing can be a bit overwhelming and requires a different mindset than programming desktop applications or designing a website, if you have an application that can benefit from the power of simultaneous operations, this is a technology worth diving into: it is nothing short of transformational.

Tuesday, October 18, 2011

SVG onload event not firing : Firefox bug / feature with Shortcuts

Firefox not firing SVG onload event

(Windows) Shortcut handling to blame...

I do a fair amount of work with SVG (Scalable Vector Graphics) images / files that contain embedded JavaScript for various event-driven interactive-SVG components. The onload() event, within SVG files, is something I regularly use too. Today I ran into a strange "feature" or "bug" that shows up in Firefox but not in the Google Chrome / Chromium browser — related to this onload event in an SVG document.

I use Chrome as my default browser, especially because I like the included developer tools a lot, so most initial testing of my web-page HTML, SVG, and JavaScript code takes place in Chrome before I move on to testing in other browsers (like Firefox). I have been working on my custom SVG RAD Components ('tis what I currently call them), and because I use one particular .SVG file as the main "test rig", I kept a Windows shortcut on my Windows 7 desktop for quick access to that .SVG file.  I just click the shortcut to launch my SVG "application" (in Chrome) or drag the shortcut into a Chrome tab, and that works just fine.  Ah, but not in Firefox!

Firefox apparently does not resolve shortcuts properly when dragged into the browser

Being a creature of habit, when I was ready to test my latest SVG file and JavaScript code within Firefox (version 7, currently), I dragged my Windows shortcut (to my SVG file) onto the Firefox window and poof... it seemed to load the SVG file, but my onload() event code simply failed to run.  I would have sworn I did this exact same drag-to-load (of my SVG) with prior versions of Firefox successfully, but either way, it is not working now.

So, I loaded the page again via drag-and-drop of my Windows shortcut to the SVG file, but this time with Firebug running (a debugger / developer tool).  I quickly saw that an error was being generated: the function referenced in the onload() event was reported as "myOnloadFx is not defined" within the onSVGLoad() code in Firefox (evt=SVGLoad). Clearly something strange was going on here, as this code "works" and has worked in Firefox before.

I played around with the code inside the SVG file a bit, and moved the onload() code from the opening SVG tag's onload="myOnloadFx()" to an inline-script (using <script> tags) just before the SVG's closing tag... and, the problem persisted.  So, what the heck?  After wasting more time on this than I ever should have, I then decided to go to the directory in which the SVG file really existed (vs. using the Windows shortcut to open it), and I dragged the .SVG file onto the Firefox window where it opened fine and ran the onload() event code just as Chrome did.  So, the shortcut-dereferencing/resolution is apparently to blame.


Firefox: want your SVG JavaScript onload to fire? Do not open the SVG by dragging a shortcut onto Firefox


Now I know.  Note: the code executed in my onload() event was in an external JavaScript file that is "included" in the SVG by way of code like this: <script type="text/javascript" xlink:href="myExternalSVGcode.js"/>

Firefox is not resolving the file locations of included code like this against the directory of the actual SVG file; instead, it is looking in the directory where the shortcut resides (in my case, the desktop).  I confirmed this to be the problem simply by moving a copy of the referenced JavaScript file onto the desktop alongside the shortcut (.lnk) and voila!  It "fixed" the issue.  UNREAL.
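The behavior boils down to which base URL the relative xlink:href reference gets resolved against. A small sketch with the standard URL class (the file paths here are hypothetical, purely for illustration) shows the difference between resolving against the SVG file's real location versus the shortcut's location:

```javascript
// A relative reference, like the xlink:href in the SVG's <script> tag.
const relativeScript = "myExternalSVGcode.js";

// Correct behavior: resolve against the SVG file's actual directory.
const correct = new URL(relativeScript, "file:///C:/projects/svg/test.svg");

// Observed behavior: resolution against the shortcut's directory instead.
const broken = new URL(relativeScript, "file:///C:/Users/me/Desktop/test.svg.lnk");

console.log(correct.href);  // file:///C:/projects/svg/myExternalSVGcode.js
console.log(broken.href);   // file:///C:/Users/me/Desktop/myExternalSVGcode.js
```

With the wrong base, the browser looks for the script on the desktop, does not find it, and the onload() function ends up undefined, exactly the symptom described above.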

I have not tested other scenarios where this could be a problem, and it is unlikely most people will ever encounter it unless they do software development (and perhaps even then only SVG/JavaScript with included external JavaScript files).  But, just in case, I figured I would post my notes here for anyone else who may encounter this weird onload behavior in Firefox.

Tuesday, August 30, 2011

Embarcadero Delphi XE2 New Features of Interest

Embarcadero Delphi XE2 Review of New Features — VERY Interesting Features!

I have been anxiously awaiting the official Embarcadero Delphi XE2 release now that this RAD (Rapid Application Development) IDE and Component-Set are poised to bring about some of the most exciting changes in Windows (and cross-platform!) application development seen in a long time. Yes, Embarcadero (formerly Borland, Inprise, and Codegear branded) Delphi is finally stepping up the application-development game and introducing some exciting technology to address shortcomings in modern MS Windows business-software-applications development.

Although many will cite the most interesting new features as support for cross-platform development, 64-bit Windows, the Amazon Cloud API, and native iOS support, for me the obvious "killer feature" is the new vector-based UI-development component suite and technology called "FireMonkey". The reason I choose this as the "killer feature" is because 1) I have felt that Windows UI development has been rather "stale" for years when building *native* (compiled executable) applications, 2) this technology is what makes large portions of the other features (like cross-platform development) even possible, and finally 3) I consider this technology capable of building real mission-critical business applications.

Delphi XE2 FireMonkey : Vector-based User-Interface Development

Delphi XE2 FireMonkey: Could it be a Disruptive Technology?

FireMonkey is the moniker Embarcadero has applied to their new scalable-vector-graphics-based Graphical User Interface (GUI) framework, which leverages the capabilities of modern GPUs (Graphics Processing Units) for hardware-accelerated, cross-platform GUIs. If that sentence did not make it obvious: this is BIG, people!

FireMonkey really could be the disruptive technology we have been waiting for with regards to developing compelling UI's for business applications. It is about time scalable vector-graphics melded with mainstream business applications (and yes, I am aware that Flash and Silverlight are vector-based technologies; I just still do not consider them to be something I want to build any large-scale enterprise applications with).

If Embarcadero is successful at marketing this vision of how modern UI's should be developed, we could be on the cusp of a huge shift in *native* application UI technology (note: I consider HTML5/JS advancements also disruptive, but in a different way and for a somewhat different target-market).

So, vector-based User Interfaces (UIs) are soon to be a reality for Delphi developers and the software applications they create (and therefore Microsoft Windows environments), but FireMonkey holds even more promise than simply modernizing our UIs — this technology is what will allow the resulting UIs to be cross-platform capable. FireMonkey provides UI elements that will ultimately look the same across the various deployment targets: 32-bit Windows applications for Windows 7, Windows Vista and XP, and Server OSes... 64-bit Windows applications for Windows 7, Windows Vista and XP, plus Server 2003 and 2008... and even Apple OS X 10.6 and 10.7 applications and iOS 4.2+ applications. What? Apple? Since that cross-platform development-tool news is going to be of substantial interest to many people, I will discuss it later in this blog; likewise, you have no doubt taken note of the 64-bit reference, which will also be discussed in more detail.

Delphi XE2 FireMonkey: Where did it Come From?

FireMonkey is based on VGScene/DXScene, which was created by KSDev (Eugene A. Kryukov) and then purchased by Embarcadero quite recently (late 2010 - early 2011). KSDev sold VGScene/DXScene as a VCL component package prior to the acquisition, and KSDev's final release on 1/13/2011 was considered "feature complete".

KSDev marketed their components as "a Delphi VCL library for WPF-like framework with advanced controls, styles, graphics and effects", with the core functionality built around a powerful vector engine (similar in concept to how Adobe Flash works) with modern features like real-time anti-aliased vector graphics, resolution independence, alpha blending, gradients, special visual fills, etc.

I actually looked into using DXScene / VGScene back over a year ago when I found myself, once again, thinking how outdated my Delphi GUI applications looked and how utterly annoying I found it that native executable Windows applications (in general), and the UI elements that made up these GUIs, did not SCALE easily when switching between various screen-sizes and pixel-densities, and that there was no easy way to give my applications the refinements to look more "modern" without purchasing a bunch of third-party controls. And, purchasing third-party controls to address the issue of modern "look" still did not resolve the issues with easy scaling of the UI-elements.

After looking at the KSDev stuff, I actually opted not to embrace their components for a few reasons. First and foremost (at the time) I considered them to be a high-risk "niche" component-set that I was unwilling to risk building mainstream applications for my customers with. I have seen all too many Delphi VCL component sets wither (think Rave Reports) and/or completely die-off over the years, and even if source-code is available, many component sets are just so specialized that it would take far too great of an investment to continue to use them in the event the developer "gave up" on them or failed to produce necessary bug-fixes and so on. KSDev had a neat thing going with their components, and thankfully Embarcadero has picked up that work and provided the credibility and reassurance I need to actually implement business-applications using that technology now in Delphi XE2!

Delphi XE2 FireMonkey: Is it Like X, Y, or Z?

This cross-platform application framework uses GPU-accelerated vector graphics to render UI elements, D2D/D3D (Direct-2D / Direct-3D) on Windows, and OpenGL on OSX. You can think of it as similar to Silverlight OOB or Jupiter (the new “application model” for Windows 8), or even Adobe Flash. When I consider the goals of Microsoft's "Jupiter", I have to wonder if perhaps FireMonkey is essentially the same thing... here is what a ZDNet article from early 2011 described Jupiter as:
Jupiter is going to be a new user interface (UI) library for Windows, built alongside Windows 8. It will be a thin XAML/UI layer on top of Windows application programming interfaces and frameworks for subsystems like graphics, text and input. The idea is Jupiter will bring support for smoother and more fluid animation, rich typography, and new media capabilities to Windows 8 devices.
Hmmmmm... sure sounds quite similar with regards to the end-result (the UI people see), though thankfully the FireMonkey implementation is not a pile of XAML and over-complexity that Microsoft always seems to come up with.

Instead, FireMonkey uses the familiar Delphi (Object Pascal) language and VCL (Visual Component Library) paradigm for its implementation, and compiles to native code. To me, working with FireMonkey is just like working with any other VCL components, and honestly anything that keeps me from having to learn yet another Microsoft UI-technology-of-the-day is a plus (I just cannot deal with XAML).

FireMonkey: Will Microsoft FUD Bury it Before it Takes Hold?

At least part of me is concerned that Microsoft will somehow work its usual FUD (Fear, Uncertainty, and Doubt) campaign against Embarcadero (with regards to FireMonkey) as they work feverishly to bring their own "Jupiter" vision and Windows-8 to market.

For all you LONG-TIME Delphi developers: do you remember how the Visual Basic vs. Delphi thing played out over a decade or more? Clearly Borland was (or should have been) light-years ahead in the RAD IDE and component-based Windows development tools/language space (especially in OOP that people could understand, unlike C++, which dominated mainstream "real" Windows apps prior to Delphi), but Microsoft worked very hard to convince developers (and corporate management) that investing in Delphi was a bad move... that Delphi was too risky... all the while working to "catch up" with Visual Basic and push that as the "solution" to corporate UI-development needs. I have used Delphi since version 2, and the fact is, Microsoft was not even remotely close to having anything as capable until perhaps the days of Delphi 2006.

As we all know in retrospect, Microsoft's strategy worked in a BIG way and only after burying Delphi and relegating it to the niche-market they fabricated through FUD did MS create a semi-decent (though wildly bloated) component set of DotNet and the reasonably nice C# language (which is clearly based on Delphi to some extent). I am concerned that somehow Microsoft will wage such a war again if they decide Embarcadero is a "threat"; or, do they even need to?

The fact is, Microsoft's decade+ campaign of marginalizing otherwise promising, and even superior, development languages and technologies has been so successful that Embarcadero has a monumental task ahead of it: convincing mainstream corporate developers to actually embrace this technology. Good luck with that!

There are so many "competing" priorities pulling at corporate IT folks and budgets that I see this as a battle that is going to be VERY difficult to win. Winning it will require a serious willingness to put some flesh on the line and suck up some losses: doing "a Microsoft" and pushing the product out there en masse, even at a potential loss, to foster widespread adoption. That is how you gain the all-important "critical mass" necessary to propel the product forward and create a self-sustaining win instead of a self-fulfilling-prophecy loss (for lack of developer density, etc.).

FireMonkey is also up against HTML5/JS hype, up against Silverlight/Flash and the forthcoming "Jupiter", and so many other competing technologies... how is it going to gain traction? When we (developers) search sites like Dice.com and see essentially ZERO postings for Delphi developer jobs (compared to oodles of C# or Silverlight or HTML5/JS jobs), what are we to do? 

Embarcadero had best be thinking long-term and be willing to take a bit of "a hit" (financially) to gain a foothold; otherwise, the simple fact is that the niftiest UI-development technology in years may make little difference to market penetration and adoption. Get your marketing/sales team (especially the latter, since "marketing" is sales without responsibility for producing revenue) ramped up NOW, Embarcadero, and have them start working some serious deals with software developers to get them using this product! OK, enough said... on to more about this tech...

FireMonkey: Embrace it, and Embrace Changes to Your Existing Applications

FireMonkey is an entirely new framework for UI development, and as such it is incompatible with your current/traditional VCL-based UIs; the two will not co-exist in the same application (i.e., if you want to port an existing application UI to the new FireMonkey technology, you will have to rewrite your GUI code).

This sure sounds a bit overwhelming, but I really think this gutsy move by Embarcadero is what will actually give FireMonkey a fighting chance — the technology is not encumbered by the burden of legacy support! This makes the implementation MUCH cleaner — we all can attest to how much we welcome the opportunity to write an application or component "from scratch" as compared to modifying a many-revision-old, widely used (and thus many possibilities to "break" something), piece of code. I expect this new code to be architecturally solid and much more ideal thanks to separating it from the older UI-VCL components.

FireMonkey Components are Containers

You will also have to get used to a bit of a paradigm shift with regard to how components are assembled, and I think it is another shift that is for the better and about time: FireMonkey components are all containers, meaning you can embed any component inside any other component. When you think about it, this makes total sense.

Something as simple as a button is composed by assembling nine components that, when put together, look and behave as a Button should. A FireMonkey Button consists of: a TLayout component to organize the control layout, three TRectangles for border, background, and foreground color, a TLabel for the Button text, and then a group of four additional components (two each for animation and effects).

The animations give us the visual mouse-over/out behavior on the button (like we have been used to seeing on websites for years), and the effects can fire on events like button-press, focus, etc., making even niftier things like "glow" effects happen. This type of animation/effects ability is present throughout all FireMonkey components thanks to the way these containers and component build-ups can be implemented.  I look forward to using this to "modernize" the look and feel of my applications, though we all need to keep in mind that this could be over-used quite easily.
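To make the container idea concrete, here is a rough Pascal-style pseudocode sketch of assembling a button-like control from those pieces at runtime. The unit names, component names, and trigger strings (FMX.Layouts, TColorAnimation, TGlowEffect, 'IsMouseOver=true', etc.) are my assumptions from what I have seen of the XE2 previews, not verified API:

```pascal
// PSEUDOCODE SKETCH - unit/property names are assumptions, not verified XE2 API.
// A "button" assembled from container components, FireMonkey-style.
uses
  FMX.Types, FMX.Layouts, FMX.Objects, FMX.StdCtrls, FMX.Ani, FMX.Effects;

procedure BuildButtonLike(AParent: TFmxObject);
var
  Layout: TLayout;
  Background: TRectangle;
  Caption: TLabel;
  HoverAnim: TColorAnimation;
  Glow: TGlowEffect;
begin
  Layout := TLayout.Create(AParent);        // organizes the whole control
  Layout.Parent := AParent;

  Background := TRectangle.Create(Layout);  // one of the TRectangles (background)
  Background.Parent := Layout;
  Background.Fill.Color := claSilver;

  Caption := TLabel.Create(Layout);         // the Button text
  Caption.Parent := Background;             // components nest inside components
  Caption.Text := 'Click Me';

  HoverAnim := TColorAnimation.Create(Background); // animate fill on mouse-over
  HoverAnim.Parent := Background;
  HoverAnim.PropertyName := 'Fill.Color';
  HoverAnim.StartValue := claSilver;
  HoverAnim.StopValue := claSkyBlue;
  HoverAnim.Trigger := 'IsMouseOver=true';

  Glow := TGlowEffect.Create(Background);   // "glow" effect for focus/press
  Glow.Parent := Background;
  Glow.Enabled := False;                    // toggled by a trigger or event
end;
```

The point is simply that every visual piece is itself a component parented inside another component; there is no special "button painting code" that you cannot get at and restyle.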

You may also want to think about how to standardize the look/feel of your application elements, and thankfully FireMonkey implements what is a parallel to CSS Styles through their own FireMonkey "Styles". I am not yet sure how far these Styles can be pushed, but I am hopeful this first version is good enough for most things. I think about how CSS has gone through a lot of change as we push into CSS3 now, and I wonder if future iterations of Delphi XE3, XE4, etc will be extending the power of their own Styles just like how CSS keeps growing its abilities.

In some regards, these Delphi FireMonkey styles are quite a bit more advanced: you can implement things like blurs, animations, and so forth, via styles. Again I have some concern about pushing UI-glitz TOO far, but, no matter what, Styles should make standardizing and quickly updating the look-and-feel of applications a LOT easier!

Perhaps FireMonkey applications will be the advertising-force Embarcadero needs to gain further recognition: when users and developers start seeing native applications that are simply stunning, they may start to ask "what is that written in?" This could be a positive thing, but I also can imagine some applications getting so ridiculous with animating every last aspect of the UI that, when that previous question is asked, it will be with a bit of disdain or ridicule.

Hopefully we all use this power wisely :)


What about Non-Visual Components?

You may already be thinking: what about all the non-visual VCL classes I use, like TList, TStringList, etc.?

Have no fear: these non-visual components will remain the same as what you are used to and will also be usable from your FireMonkey-based-UI applications. The fact is, if you have already done a decent job of separating your UI-implementations from the underlying event-code, database-interaction, and such, you may not have TOO difficult of a time updating your applications.

You are not going to have "data-aware" components like TDBMemo anymore under FireMonkey, but of course there is an alternative way of going about this. The new "LiveBindings" within the FireMonkey framework allow you to connect any type of data to any UI or graphical element in VCL / FireMonkey; consider this a mechanism for creating "live" relationships between objects and also between individual properties of objects. It has some serious potential!

I am excited by this feature, and look forward to seeing how far I can push these live interrelationships. LiveBindings are going to allow you to do things you can not do with existing data-aware controls too. And, LiveBindings are *not* just limited to FireMonkey controls (i.e., there is support for this technology in the "old" style VCL too, as part of the Delphi XE2 updates). You will be able to do things like bind the "Caption" property of a TLabel to the Field values in a dataset (or the column name, etc.), and much more.

Since the "bindings" are accomplished using an expression-engine (vs. just simple hard-coded bindings) you can bind on evaluated-values. E.g., bind your label control's caption to an expression like TDBColumn.DisplayName + " column value is:" + dataset.field.valueAsString (pseudo-code used for example). You get the idea. It really is powerful.
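Expanding that pseudo-code just a bit, an expression-based binding in code might look roughly like the following Pascal-style pseudocode. I want to stress that the TBindings / TBindingExpression calls shown here are my guess at the shape of the API from pre-release descriptions, not confirmed XE2 syntax:

```pascal
// PSEUDOCODE SKETCH - API names/signatures are assumptions, not verified
// XE2 LiveBindings syntax. Bind a label caption to an evaluated expression.
uses
  System.Bindings.Expression, System.Bindings.Helper;

procedure BindCaption(ALabel: TLabel; ADataSet: TDataSet);
begin
  // The expression engine re-evaluates this whenever the source changes;
  // it is NOT a simple hard-coded property-to-property copy.
  TBindings.CreateManagedBinding(
    { input scope:  } [TBindings.CreateAssociationScope([Associate(ADataSet, 'ds')])],
    { expression:   } 'ds.FieldByName(''Amount'').DisplayName + ' +
                      ''' column value is: '' + ds.FieldByName(''Amount'').AsString',
    { output scope: } [TBindings.CreateAssociationScope([Associate(ALabel, 'lbl')])],
    { output prop:  } 'lbl.Text',
    nil);
  TBindings.Notify(ADataSet, ''); // force an initial evaluation
end;
```

Whatever the final syntax turns out to be, the key idea stands: the right-hand side is an arbitrary evaluated expression, not just a field name.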

But, that is not all... If you choose, you can implement bi-directional property-to-property bindings (which sure sounds like data-aware functionality). This bi-directionality implies something somewhat profound: it should be possible to consolidate UI-element frameworks so they no longer require those TDB... versions of each control (i.e., remove the need for "data-aware" versions of each control), since something like a Label can be bi-directionally "bound" and suddenly become that data-aware control.

This is going to take some hands-on experience to get used to, but it is a significant step forward (and, should bring writing "data-aware" custom controls into the realm and reach of many more developers; I say this because I have always found writing TDBxyz data-aware custom components WAY too difficult!).

Note: I have read that the expression engine used by LiveBindings is available to us in our own programs, to evaluate any Object Pascal expression dynamically at runtime; this should make for some neat applications too!

Cross-Platform Native Applications using Delphi

OK, this topic certainly deserves some attention, especially from all of us that can still remember the days of Kylix, which was a nifty idea but one that failed miserably for all sorts of reasons (one being the simple fact it was not maintained at all after early releases). Well, with that memory pushed aside, let's think about the prospects of true cross-platform NATIVE-code deployment again.

As mentioned earlier, FireMonkey provides UI elements that will ultimately look the same across the various deployment targets: 32-bit Windows applications for Windows 7, Windows Vista and XP and Server OS's... 64-bit Windows applications for Windows 7, Windows Vista and XP; Server 2003 and 2008... and even Apple OS X 10.6 and 10.7 applications and iOS 4.2+ applications. In addition, there is some speculation that Android support and Linux will be forthcoming soon after the release of Delphi XE2 (hopefully as a free update!!)

The IDE Runs Only On Windows : but, you can deploy to other targets

It is not surprising that the Delphi XE2 RAD IDE runs only on Windows, though part of me wonders if Embarcadero will get around to converting the IDE to be FireMonkey-based (if even remotely possible?) and make the IDE run on any target-platform. Regardless, for now it is Windows only (as it always has been; aside from Kylix), and we developers will have to go through a few extra steps to compile and deploy applications to the Apple targets.

Delphi for Apple OSX/iOS

From what I have gathered via online discussions (I have not tested the Apple deployment stuff at all, nor do I have much initial concern for it, even though long-term I expect to support Apple targets), Embarcadero / Delphi is apparently relying on the Free Pascal Compiler (FPC) for the iOS target, while OS X gets its own Delphi cross-compiler.

These cross-compilers use the same source code you have written for your Windows-based applications (the source that Delphi's compiler used to generate Windows binaries) and generate native binaries that run on an Apple / Mac computer and/or iOS device (i.e., a single collection of source code yields multiple platform-specific binaries).

There are also significant limitations on what can simply be recompiled and deployed to the Mac. I am under the impression that, outside of FireMonkey, substantial portions of the VCL will not be available on the Mac yet (I may be wrong). And I really can not imagine some things EVER being supported on the iOS/OSX platform (especially some of the "native" database-access stuff).

You will certainly need to use FireMonkey for any UI you plan to have run on the Apple side of things, but in addition, I suspect there will be all sorts of other caveats regarding what will and will not "port" directly simply via a recompilation. Again, I see the Apple thing as a longer-term possibility for me. I'd be more intrigued with Linux deployment immediately (since I have Linux running in a Virtual Machine or two). Time will tell. I look forward to seeing what people are able to achieve on the Apple platform with Delphi XE2.

Delphi Applications for Cloud / Hosted Scenarios

I came across a sentence online somewhere stating that "Delphi and C++ applications can be deployed to Amazon EC2 and Windows Azure, with support for Amazon Simple Storage Service API, Queue Service, and SimpleDB." This is interesting, but I really do not know exactly what was needed to support this, and so far, I have not had the need for this.


Delphi XE2 64-Bit Support

Native 64-bit Windows applications are something that quite a few Delphi developers have clamored for over the past couple years, and apparently they are getting their wishes fulfilled. Delphi XE2 is to include support for Windows 64-bit machines, including a debugger and deployment manager. And, it looks rather easy to deploy an application as a 64-bit application.

In the Delphi XE2 Project Manager you will see a new node under each project where you can choose your "Target Platforms". By default, your existing projects will have a target-platform entry for 32-bit Windows. Add the new 64-bit Windows platform target node to the tree, select it, recompile, and voila!, you have a 64-bit executable.

I do not know how many developers really "need" 64-bit capabilities (such as addressing very large blocks of memory or working with 64-bit integers), but now the capability is there and Delphi need not be considered lacking in this regard. You will have to do some (most likely minor) code review to make sure you do not have any code that is only 32-bit-safe; e.g., bit-level manipulation that shifts bits around in INTs, crazy things with pointers, etc. I do not expect most people will have significant work to do in this regard prior to using the 64-bit compiler and deployment.
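The classic culprit in that kind of code review is casting pointers through a 32-bit Integer. A small sketch of the pattern and its fix, using the pointer-sized NativeInt type and the CPUX64 conditional (I believe both are present in the XE2 compiler, though treat the specifics as my assumption):

```pascal
// Sketch of a common 32-bit-only pattern and its 64-bit-safe replacement.
procedure CheckPointerCasts;
var
  P: Pointer;
  N: NativeInt;
begin
  P := @N;

  // 32-bit-only: Integer stays 4 bytes even on Win64, so this would
  // truncate a 64-bit pointer value.
  // N := Integer(P);   // unsafe on a 64-bit target

  // 64-bit-safe: NativeInt matches pointer size on both targets
  // (4 bytes on Win32, 8 bytes on Win64).
  N := NativeInt(P);

  {$IFDEF CPUX64}
  Assert(SizeOf(Pointer) = 8); // compiled for the 64-bit Windows target
  {$ELSE}
  Assert(SizeOf(Pointer) = 4); // compiled for the 32-bit Windows target
  {$ENDIF}
end;
```

Code that never stuffs pointers into integers (or bit-twiddles based on a 4-byte assumption) should mostly recompile untouched.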


Delphi Reporting Components Update : Finally!
Goodbye Rave Reports (Junk!)

OK, I could not obtain 100% confirmation of this quite yet, but rumor has it that Delphi XE2 will include the FastReport VCL 4 RAD Edition reporting tool — whether true or not, the fact is I refuse to invest ANY more time using Rave Reports (what a buggy pile of @#!@; an embarrassment that needed addressing. Nevrona's pathetic "support" and glacial pace of resolving issues and bugs caused me and many others to become utterly fed up with the product and move elsewhere).

FastReports surely has to be a better option by a long-shot, as it is actively maintained and developed. Compare that to how Nevrona can not even update their *website* for years on end. I can not believe how long it took Embarcadero to move past Rave Reports... perhaps they made the stupid move (or Borland did) of signing some longer-term contract with Nevrona without any sort of "out" for them not meeting certain quality criteria, support criteria, etc. Who knows. But, I am excited by the prospect of having a good reporting tool (by default) included with Delphi!


Other New Features in Delphi XE2 Worth Noting

More details will emerge quite soon. In fact, I am supposed to listen to a Webinar about the product launch tomorrow; hopefully a near-term Delphi XE2 release date will be announced. And, I also hope the RTM (i.e., final, release-ready) version of Delphi XE2 is truly "ready" and not full of a bunch of annoying bugs.

My guess is that like most recent releases, there will be an update-pack available for it nearly as soon as it is officially "released"; hopefully it is solid enough to be truly prime-time ready. I have quite a few Delphi applications I want to update to take advantage of these new features ASAP.

I am not a big "DataSnap" user, but this release is supposed to have a fair amount of updates to that functionality. There are components and functionality related to the new "Cloud" stuff, like TAzureQueueManagement and the Amazon Simple Storage Service, Queue Service, and SimpleDB APIs. I think Documentation Insight (a Delphi XML documentation tool) is new too.

Either way, there is a LOT of new stuff packed into this XE2 release, as already discussed. To me, the FireMonkey stuff alone is a HUGE chunk of functionality and makes me quite eager to start building some fantastic XE2-based applications leveraging these features.


Delphi XE2 — CONCLUSION: Enterprise Applications are Poised for a Major Update

As reported in this brief SD Times article and interview, Michael Swindell, senior vice president of product management for Embarcadero, seems to be clearly positioning Delphi XE2 and FireMonkey where I see it making the most sense: business applications:
“We know where we should be going with the experience of non-entertainment applications,” he said. [in reference to the fact that FireMonkey ships with about 200 user-interface controls that include GPU-powered scalable vector and 3D effects] 
[...] 
Swindell emphasized that FireMonkey is focused on heavy-duty business applications—not entertainment or advertising sectors, where rich Internet applications already are strong. To that end, FireMonkey introduces a feature called Live Binding, which lets developers bind any UI control or graphical element to any data source, he said. Native CPU application execution and data access allow FireMonkey applications to perform at a very high level, he added. 
[...] 
“We saw this as a gap and as where applications need to go,” Swindell said. “Companies continue coming out with 1990s-style Windows Forms applications and rolling their own frameworks. There hadn’t been anything out of the box to get [developers] there quickly and with a lot of power.”
I could not agree more with that concluding quote about the "gap" that existed. Being a business software developer, I am ready to address that gap and use Delphi XE2 to do so.

Here's hoping Delphi XE2 / FireMonkey gains some widespread adoption and ushers in an age of resurgence in Delphi software development!

Wednesday, April 27, 2011

VariCAD 2011 : 2D & 3D CAD Software Update Release

If you are a CAD (computer-aided-design) hobbyist or professional, you will be pleased to know that VariCAD has just released another update to their very capable and affordable 2D (two dimensional) and 3D (three dimensional) CAD Software program.  I fall into the former group (hobbyist and self-declared "inventor"), and I really enjoy being able to create 3-dimensional models of my various "inventions" and ideas.

When I shopped around for *reasonably* priced CAD software with powerful 3D-modeling features, VariCAD was the one offering that really stood out (read my prior Blog where I review VariCAD and discuss my trials of competing free and affordable 3D CAD products) to me after evaluating quite a few competing CAD products — so I bought it.
2011-04-18 — VariCAD 2011 1.x RELEASE SUMMARY: this latest release includes numerous improvements to the 2D drawing functionality of VariCAD. It contains a rebuilt and significantly better DWG/DXF interface, new or changed 2D functions, better support for 2D NURBS objects (splines), a significantly improved user interface, and improved STEP input. Available for Linux and Windows platforms (32-bit and 64-bit).

Pen and graph-paper can only take you so far when planning out even moderately complex engineering or product-design projects, and the time-savings and accuracy that is obtainable with software like VariCAD can easily make the price well worth the investment.

VariCAD Example : Real-Life Time/Cost Savings
One of my more recent VariCAD 3D-layout projects that saved me a TON of time was, of all things, designing an optimal heating and air-conditioning (HVAC) ductwork system for a planned new geothermal system install at my residence.  I started out taking measurements with a tape measure and then sketching some ideas on paper, but I was soon overwhelmed with overlapping duct trunk-lines that had all sorts of multi-dimensional constraints on their placement (especially due to immovable walls); it quickly became apparent that pencil and paper could not ensure success on this project — VariCAD to the rescue!

Geothermal Ductwork Planning in 3D

As the above VariCAD 3D scene rendering from one view/perspective demonstrates, this was a rather complex ducting plan to implement, and the color-coding of various trunk lines in VariCAD (for supply air and return air that would feed out and into the 6-ton WaterFurnace geothermal unit) saved the day and kept me from going insane as I tried to visualize how this would all work.  I also placed a couple gray slabs into the drawing to remind myself of where I was passing through immovable walls, and this really helped with the visualization process too.

This 3-D CAD software mockup of how the proposed geothermal ducting could be implemented gave me the confidence I needed to know that if the project moved forward, it would work as planned — this is a confidence I could not have otherwise had, as the scene was just too darn complex to visualize with simple hand-drawn orthographic sketches or *attempts* at 3D.  This model was created to-scale, and everything that worked in the CAD software would work in reality because of the accuracy the CAD software brought to the problem at hand.  The bottom line here is that a $600 piece of software avoided what could have been countless hours of expensive reworking of ductwork "on the fly" from a lack of proper planning.

So, if you have any desire to expand your abilities as a home-improvement expert or hobbyist, perhaps the latest VariCAD 2D/3D CAD software (which has free 30-day trial) may be for you also.  I found the software INCREDIBLY USEFUL as an individual that wants to ensure the best outcome for home-improvement projects, and I can only imagine how wonderful a tool like this would be to professional ductwork installers (especially where re-configuring existing ductworks for geothermal, etc. is happening).  I can see this being used as a marketing tool also, so customers can visualize what it is you (the contractor) brings to the table and plans to implement.

Geothermal ductwork planning in 3D is just one of the many uses I have found for this software. VariCAD 2011 furthers my appreciation for the cost savings and time savings that design-automation software can offer, and I have many more projects in the works for which I am (and will be) using VariCAD to perform 2D and 3D CAD layout.  I find myself quickly going straight to the computer when I have ideas to "sketch out" now, bypassing paper and pen altogether.   Hopefully you find this software to be equally useful and valuable.

UPDATE (2013): I have continued to use VariCAD 2012 and VariCAD 2013 and love the product.  Additional features and enhancements have come along to make an already great product even better.  Give it a try, and if it just "clicks" with you, productivity is bound to follow.

Continue to read this Software Development and Technology Blog for computer programming articles (including useful free / OSS source-code and algorithms), software development insights, and technology Techniques, How-To's, Fixes, Reviews, and News — focused on Dart Language, SQL Server, Delphi, Nvidia CUDA, VMware, TypeScript, SVG, other technology tips and how-to's, plus my varied political and economic opinions.

Friday, August 13, 2010

Nvidia CUDA Toolkit 3.1 - with Fermi card optimizations

The latest Nvidia "Fermi" GPUs (Graphical Processing Units) are making their way to the stores now by way of the latest Nvidia Graphics Cards that are definitely worth a look if it has been over a year since you upgraded your graphics card - the processing power per watt now is just unbelievable!  And, the latest Nvidia CUDA Toolkit 3.1 release has some features specific to the new Fermi cards and architecture that you may want to check into; just in case you are into GPU programming for fun.

Nvidia (NASDAQ:NVDA) has moved to a modern 40nm architecture for these new GPUs, which has allowed them to be much more power-efficient while cranking out tons of graphics horsepower for gaming and/or professional applications that make use of their stream-processors (aka, "CUDA cores") on the graphics card for high-performance computing (HPC) via massively-parallel-processed algorithms.  CUDA is NVIDIA’s parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU (graphics processing unit) for applications including image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing, and much more.

Get your NVidia Fermi Graphics Card
First, get hold of a new Fermi-based Nvidia CUDA Graphics card to develop and run your new CUDA applications on.  There are some really great cards out now that offer some really nice punch for the buck (aka, "price-to-performance ratio"), including these:
  • Nvidia Geforce GTX 460 - a very reasonably priced (~ $200.00) super-powerful mainstream / desktop graphics card (targets gamers mainly) that smokes every other card on the market in this price range.  This card offers 336 CUDA processing cores and a Gigabyte of RAM to run your new Nvidia CUDA Toolkit 3.1 applications on.
  • The brand new professional-class NVidia Quadro 4000 (NOT to be confused with the old Quadro FX 4000!) -- this ~$1000 card has 256 CUDA cores coupled to 2GB of GDDR5 RAM and is well suited to apps like CAD, Photoshop CS4 / CS5, and other CUDA-enabled professional apps. The card is quite power-efficient at only 142 watts max.
Now you can start putting some new CUDA abilities to work...

Nvidia CUDA Toolkit 3.1 Release Highlights
  • GPUDirect(tm) gives 3rd party devices direct access to CUDA Memory
  • Support for 16-way concurrency allows up to 16 different kernels to run at the same time on Fermi architecture GPUs
  • Runtime / Driver interoperability enables applications to mix-and-match use of the CUDA Driver API with the CUDA C Runtime and math libraries via buffer sharing and context migration
  • New language features added to CUDA C / C++ include:
    • Support for printf() in device code
    • Support for function pointers and recursion make it easier to port many existing algorithms to Fermi GPUs
  • Unified Visual Profiler now supports both CUDA C/C++ and OpenCL, and now includes support for CUDA Driver API tracing
  • Math Libraries Performance Improvements, including:
    • Improved performance of selected transcendental functions from the log, pow, erf, and gamma families
    • Significant improvements in double-precision FFT performance on Fermi-architecture GPUs for 2^n transform sizes
    • Streaming API now supported in CUBLAS for overlapping copy and compute operations
    • CUFFT Real-to-complex (R2C) and complex-to-real (C2R) optimizations for 2^n data sizes
    • Improved performance for GEMV and SYMV subroutines in CUBLAS
    • Optimized double-precision implementations of divide and reciprocal routines for the Fermi architecture
  • New and updated SDK code samples demonstrating how to use:
    • Function pointers in CUDA C/C++ kernels
    • OpenCL / Direct3D buffer sharing
    • Hidden Markov Model in OpenCL
    • Microsoft Excel GPGPU example showing how to run an Excel function on the GPU


Financial Opportunities - Nvidia (NASDAQ:NVDA) stock?
Since this blog also focuses on stock-market and investing opportunities, I have to contemplate whether the new Nvidia Fermi cards are going to drive substantial sales/revenue-gains and associated profit-gains for Nvidia corporation.  I can not help thinking that it is inevitable, especially when so many of the online retailers I went to in search of a new Nvidia GTX 460 card from were out of stock, backordered, and so forth.

And, these cards are out there already... people lucky enough to have gotten hold of them already are essentially uniformly impressed and satisfied with the performance of the GTX 460 card.  I have read all sorts of reviews from buyers saying how these cards have set a new standard in desktop gaming performance (frame-rates, etc) while also being rather reasonable in their power consumption.  Nvidia allows for running two cards together (in SLI-mode) for even higher performance, and from all the tests and reviews I have read: wow... these are FAST!

So, it seems to be nearly a guarantee that Nvidia is going to move a LOT of these cards.  The question is: at what margin?  They are being VERY competitive and aggressive with their pricing model, which suggests that margins may not be TOO large, but I do not know.  I will assume they are being sold for a profit, and that with enough volume, their margins will also be pretty decent.

And, then there is the super-computing and professional market: THAT is what I am more interested in from an investing standpoint.  These cards are being used in top-of-the-line supercomputers and high-performance computing systems and clusters, where a single super-computer may use hundreds or thousands of these cards.  And, Nvidia's top Quadro 6000 graphics card lists for $6,000 -- targeting digital production firms (think: Adobe Photoshop and Premiere, e.g.) and engineering firms doing real-time 3D work and the like.  These firms WILL buy the new Fermi-based cards in order to gain efficiencies at their firms (since these cards are up to 8-times faster than the prior generation; meaning: much time saved when rendering, etc.).

Sure, the economy is "slow" right now, but what better way for companies to gain efficiency for a reasonable sum?  Move some processing off to new super-powered Nvidia GPUs!  If your employees spend less time waiting for computing operations to complete, perhaps you can get by with fewer employees (note: none of us like the sound of that, but it IS what helps drive "productivity"; I'd just prefer seeing any freed-up employee time redirected toward more creativity, product design, improvement, etc.).

Bottom line: NVIDIA HAS SOME AWESOME GRAPHICS CARDS TO CONSIDER, and some updated tools to go with them!