Sunday, December 18, 2011

Nvidia CUDA Toolkit 4.1 and Parallel Nsight 2.1

Nvidia's CUDA technology has been around for 5 years now, and only 6 months ago I blogged about it when the CUDA Toolkit 4.0 was released.  Nvidia is keeping up the pace of innovation with a substantive upgrade to both the Nvidia CUDA Toolkit (now at version 4.1) and Parallel Nsight (now at version 2.1).

CUDA: What is it?


CUDA is NVIDIA’s parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU (graphics processing unit) for applications including image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing, and much more.  The current Nvidia (NASDAQ:NVDA) "Fermi" line of GPUs provides incredibly powerful parallel computing within reach of most individual users and businesses through rather affordable Nvidia graphics cards (and the upcoming Nvidia "Kepler" GPUs, due in early 2012, will only be better, faster, and more efficient).

Note: many of these latest CUDA features require a "Fermi"-based GPU (using the LLVM-based compiler does, for one).  These cards are worth investing in if you plan to do any CUDA development.  You can get a Fermi-based, CUDA-capable graphics card that is quite affordable and power-efficient: I rather like my Quadro 600 (~$160.00), which uses only 40W for its 96 CUDA cores and has handled all of my development work with ease.

New in Nvidia CUDA Toolkit 4.1

LLVM Compiler / Toolchain Support

Nvidia CUDA Toolkit 4.1 now includes a new LLVM-based CUDA compiler along with over 1000 new image processing functions, plus a redesigned Visual Profiler.  The integration of the open-source Low Level Virtual Machine (LLVM) toolchain definitely has my attention (LLVM is a collection of modular and reusable compiler and toolchain technologies).

The first notable benefit of the LLVM compiler is that Nvidia claims this compiler delivers up to 10% faster performance for many applications (compared to their prior in-house developed C/C++ compiler).

But what strikes me as the most (potentially) important aspect of this move to LLVM is that we could soon see support for programming languages beyond C/C++, and/or support for additional CPU targets.  Nvidia has apparently used the Clang C and C++ front ends within the LLVM framework and hooked in support for the CUDA parallel development environment.

Although Nvidia's (CUDA C and CUDA C++) compiler modifications are not open-sourced, LLVM will provide a foundation for more easily adding language and processor support.  Given Apple's use of LLVM on the ARM platform, I have to wonder if ARM is going to become a build-target in the not-too-distant future.  There are also open-source projects that let other programming languages target the LLVM toolchain, so the potential exists for eventually accessing CUDA / GPU support directly from other languages (perhaps Java, Python, etc.).

Other Major New Features in CUDA Toolkit 4.1
(from Nvidia website, with some added comments and details)

New & Improved “Drop-In” Acceleration With GPU-Accelerated Libraries

  • Over 1000 new image processing functions in the NPP (Nvidia Performance Primitives) library — this brings the total number of NPP functions to 2200+. These GPU-accelerated functions (building blocks) for image and signal processing include capabilities geared toward arithmetic, logic, conversion, statistics, filters, and more; also, these can execute on the GPU at up to 40x (yes, 40 times!) the speed of Intel IPP (Integrated Performance Primitives).  This is great for media, entertainment, and visual processing applications.
  • New Boost-style placeholders in the Thrust CUDA C++ template library now allow inline functors (see the short sketch after this list).  Thrust includes optimized functions for sort, reduce, scan operations, and so on.
  • New cuSPARSE tridiagonal solver up to 10x faster than MKL on a 6-core CPU; this release also includes up to 2x faster sparse matrix-vector multiplication using the ELL hybrid format
  • New support in cuRAND for MRG32k3a and Mersenne Twister (MTGP11213) RNG algorithms 
  • Bessel functions now supported in the CUDA standard Math library 
  • cuFFT (Fast Fourier Transforms) library now has a thread-safe API (callable from multiple host threads); also, substantial improvements in speed!
  • cuBLAS level 3 performance improvements of up to 6X over Intel MKL (Math Kernel Library)
  • Batched-GEMM API for more efficient processing of many small matrices (i.e., 4x4 through 128x128 matrices; up to 4X speedup over MKL); up to 1 TFLOPS sustained performance (yes, a teraflop!  Wow)
  • Average and rounded-average functions (e.g., hadd / rhadd - signed and unsigned)
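
As a quick illustration of those new Thrust placeholders (a minimal sketch of my own, not Nvidia's code): the placeholder syntax lets you pass a small computation inline where you would previously have written a named functor. Assuming the Thrust headers that ship with CUDA 4.1:

#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/functional.h>

using namespace thrust::placeholders;   // brings _1, _2, ... into scope

int main()
{
    thrust::device_vector<float> v(4, 1.0f);
    // The placeholder expression stands in for a one-off functor type:
    thrust::transform(v.begin(), v.end(), v.begin(), 2.0f * _1 + 1.0f);
    return 0;
}

The expression 2.0f * _1 + 1.0f replaces what used to be a separately declared functor struct, which keeps simple transforms far more readable.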

Enhanced & Redesigned Developer Tools (On Windows, Mac, & Linux)

  • Redesigned Visual Profiler with automated performance analysis and expert guidance (a guided workflow with drill-down expert guidance); during an online presentation, this was described as "almost like having an Nvidia engineer in a box", which sure sounds handy!  The built-in automated analyses should let you benefit from those engineers' experience and guide you toward best-practice outcomes.
  • Assert() in device code - helpful for debugging! (see the short device-code sketch after this list)
  • CUDA_GDB support for multi-context debugging and assert() in device code
  • CUDA-MEMCHECK now detects out of bounds access for memory allocated in device code
  • Parallel Nsight 2.1 CUDA warp watch visualizes variables and expressions across an entire CUDA warp
  • Parallel Nsight 2.1 CUDA profiler now analyzes kernel memory activities, execution stalls and instruction throughput
  • Learn more about debugging and performance analysis tools for GPU developers on Nvidia's CUDA Tools and Ecosystem summary page
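
To illustrate the new device-side assert() mentioned above, here is a minimal sketch of my own (not from Nvidia's samples); it assumes a Fermi-class GPU, since device-side assertions require compute capability 2.x, plus the CUDA 4.1 toolkit:

#include <assert.h>
#include <cuda_runtime.h>
#include <cstdio>

__global__ void checkPositive(const float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        assert(data[i] > 0.0f);   // fires in device code; reports file/line and thread on failure
}

int main()
{
    float h[4] = { 1.0f, 2.0f, -3.0f, 4.0f };   // the -3.0f will trip the assert
    float *d;
    cudaMalloc(&d, sizeof(h));
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);
    checkPositive<<<1, 4>>>(d, 4);
    // A failed device assert surfaces as an error on the next synchronizing call:
    printf("kernel status: %s\n", cudaGetErrorString(cudaDeviceSynchronize()));
    cudaFree(d);
    return 0;
}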

Advanced Programming Features


  • Access to 3D surfaces and cube maps from device code
  • Enhanced no-copy pinning of system memory: cudaHostRegister() alignment and size restrictions have been removed (sketched after this list)
  • Peer-to-peer communication between processes
  • Support for resetting a GPU without rebooting the system in nvidia-smi
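
To make the no-copy pinning item concrete, here is a minimal sketch of my own (error checking omitted) showing cudaHostRegister() pinning an ordinary malloc'd buffer in place so it can feed an asynchronous copy; with 4.1, the old alignment and size restrictions on the registered range are gone:

#include <cuda_runtime.h>
#include <cstdlib>

int main()
{
    size_t nbytes = 1 << 20;
    float *h_buf = (float *)malloc(nbytes);   // plain pageable allocation
    float *d_buf;
    cudaMalloc(&d_buf, nbytes);

    // Pin the existing buffer in place -- no staging copy of the data is needed:
    cudaHostRegister(h_buf, nbytes, cudaHostRegisterDefault);
    cudaMemcpyAsync(d_buf, h_buf, nbytes, cudaMemcpyHostToDevice, 0);
    cudaDeviceSynchronize();

    cudaHostUnregister(h_buf);
    cudaFree(d_buf);
    free(h_buf);
    return 0;
}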

New & Improved SDK Code Samples


  • simpleP2P sample now supports peer-to-peer communication with any Fermi GPU
  • New grabcutNPP sample demonstrates interactive foreground extraction using iterated graph cuts (this is really neat!)
  • New samples showing how to implement the Horn-Schunck method for optical flow, perform volume filtering, and read cube map textures

New in Nvidia Parallel Nsight 2.1 for Visual Studio
Parallel Nsight is a powerful IDE-integration and development tool that allows you to perform the following types of procedures from within Microsoft Visual Studio:

  • Debug CUDA Kernels directly on the GPU hardware
  • Examine (potentially thousands of) threads that are executing in parallel
  • Use on-target conditional breakpoints to locate errors
  • Use the CUDA memory-checker
  • Perform System-Trace activities to review CUDA activities that span your CPU(s) and GPU(s)
  • Perform deep kernel analysis to find performance bottlenecks so you can realize the speedup that is possible with CUDA and massively parallel code.
  • Profiling capabilities including advanced experiments to measure memory utilization, instruction throughput, and stall conditions
Some of the new capabilities include:
  • a "warp watch" ability to watch variables and expressions across an entire CUDA warp (a particular level of granularity that is very useful to watch)
  • analyzing kernel memory (alloc/dealloc events, execution stalls, etc)


Summary: CUDA 4.1 Continues Nvidia's Improvements to Its Great GPU-Accelerated Application Development Tools

This latest release of the CUDA Toolkit from Nvidia continues to make life easier for any of us who are into parallel programming with modern GPUs.  Although GPU computing can be a bit overwhelming and requires a different mindset than programming desktop applications or designing a website, if you have an application that can benefit from the power of simultaneous operations, this is a technology worth diving into: it is nothing short of transformational.

Wednesday, December 14, 2011

VariCAD 2012 Sale : 3D CAD Software that is Affordable

VariCAD 2012 CAD Software Sale ends December 28th, 2011

If you are in the market for an affordable 2D and 3D CAD software package for personal, hobby, or professional use, give VariCAD 2012 a look now while VariCAD is offering a 20% off sale through December 28, 2011, for either the Linux or Windows version, and for versions including one year of upgrades and support.

This super-affordable software (especially compared to "big name" 3D CAD packages like AutoCAD and SolidWorks) is very robust, yet simple enough that a novice can pick it up in little time (like I did). It can produce some fantastic 3D renderings too (from their samples gallery):
(Sample rendering from the VariCAD gallery: "Step Tools")

As you can see, it can be quite useful for visually prototyping mechanical objects and personal inventions. I tend to focus on the 3D aspects of the software (because that is how I think about the objects I am creating), but the 2D CAD features are plenty impressive also.


My Own VariCAD Use / Experience

I have been much more productive putting my product-ideas down "on the computer" instead of "on paper" (where they tended to be illegible a few weeks later).

My visual-brainstorming has led to some interesting creations (in wood, metal, and carbon-fiber) that I would otherwise not have attempted building -- especially the carbon-fiber inventions, since the material is expensive and I do not want to waste any! The 3D modeling in VariCAD lets me see how parts interact BEFORE I commit to building something (and, I can tweak my design without wasting materials).

In a prior blog about VariCAD, I posted a rendering of a 3D HVAC-ducting layout that I created with VariCAD on Windows -- this design visualization helped me a great deal with what was a complex geothermal heating-system upgrade where new trunk-lines had to be installed in an existing home while working around various obstacles.

Note: you can always try the software (before buying)... just download the trial from VariCAD.com; if you like it, save some money and buy a license while it is on sale!

Wednesday, November 30, 2011

Installing Redis Database as Windows Service (Redis.io DB) : Issues and Workarounds

Run Redis.io DB as Windows Service

I recently started experimenting with Redis database (a NoSQL DB) as an alternative to SQL-Server for certain development requirements.
"Redis is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets."
Since I do most of my development under Microsoft Windows, I was hoping to run my Redis instance on my Windows7 x64 Pro development desktop; more specifically, I wanted to run Redis as a Windows service.

Since Redis natively targets Linux/Unix environments, I went searching to see if there was a Windows port of the Redis database project that included the ability to run it as a service.  I found two open-source projects that, when combined, allow me to run Redis on Windows as a Service.


Running Redis DB on Win-7 x64
Installation Notes


The first thing I did was acquire a Windows port of the Redis server. I ended up using the compiled version from this project: https://github.com/dmajkic/redis. The project is described as:
"Windows 32 and x64 port of Redis server, client and utils. 
It is made to be as close as possible to original unix version."
That "close as possible" statement mainly refers to how Redis commands that would (under Unix) rely on fork() to perform background operations are implemented as foreground operations in the Windows port.  But, for the purposes of my software development and testing, this would suffice.  I can run my "production" instance of Redis on one of my Linux virtual machines (in particular, I have it running on OpenSuse 12.1 x64).

Get the Redis.io for Windows Build
To begin with, download the actual Redis.io (for Windows) builds from here: https://github.com/dmajkic/redis/downloads (in my case, I selected the latest x64 zip build, which was redis-2.4.2-win32-win64-fix.zip).

Within that zip archive, you will see two sub-directories: "32bit" and "64bit".  These directories include the redis-server.exe and redis-cli.exe files (plus the redis.conf configuration file, etc.).  Simply copy the contents of the archive's "64bit" (or "32bit") directory into the directory you will run Redis from. For example, I placed the x64 files into c:\Redis\

Theoretically, I simply needed to get the redis-server.exe running as a service now...


How to Install Redis DB as a Windows Service:
Win-7 x64 Installation Notes

OK, I have the Redis for Windows executables in my c:\Redis\ directory. Now it is time to get this database running as a Windows Service.  I found one such project that appeared active enough to merit consideration: the Run Redis as Service on Windows project on GitHub.

You need to download the compiled executable (RedisService.exe), which is available as a rather small (7 or 8KB) file on the "downloads" page for the project, and place it in your Redis directory.

Note: you may want to reference this Microsoft site: using SC to create a service, if you wish to understand in more detail what the upcoming commands are doing.

Although you may experience issues (as I will discuss next), you are now supposedly ready to install and start the RedisService.exe (from the command-line in a Windows console window) with the following command (note: alter the "Redis242" service-name to whatever makes sense for you as a process-label; also, change c:\redis portions to whatever directory location you chose):


sc create Redis242 start= auto DisplayName= Redis242 binpath= "\"C:\Redis\RedisService.exe\" C:\Redis\redis.conf"

IF the above statement *appears* to work, the service may or may not start when you execute the following: 

sc start Redis242


But, if you experience some of what I did, the service may be failing for what I will call "hidden" reasons...

Fix Redis Windows Service Problems
and Potential Issues to Workaround

What I discovered with this RedisService.exe Windows service for Redis is that it is quite typical of open-source code: it makes a lot of assumptions and does little to provide proper dependency-testing or meaningful error-condition notifications.

When you create the service (per above code: sc create ...) and/or try to start the service (using sc start) it may appear to just "hang" or otherwise take a very long time to attempt to start prior to failing with timeout errors.

The reasons for the redis-windows-service failing to start properly will be obfuscated; here are some of them:
  • Starting the service will throw 1053 (timeout) errors without indicating why; one possible failure reason is that the .NET Framework 4.0 (version 4.0.30319) must be installed for this service to work.
  • Next, depending on your security setup (like my Windows-7 Pro security settings), you may need to tell Windows Firewall that it is OK for this process to act on your local network.  The easiest way to do this is run the redis-server.exe from the command-prompt and allow it access (to local network, through Firewall) when prompted.
  • Next, if you attempt to run the redis-server again, you may see another (otherwise hidden) issue in that the executable is not from a "trusted source" or such; again, this issue can be resolved by choosing to allow this un-trusted process to run when given the option.

After resolving this list of potential issues, you should be able to execute the sc create command and then perform an sc start redis242 (or whatever name you gave the service), to start the Redis Windows service and no longer experience a 1053 error due to timeouts caused by hidden reasons.

Redis Windows Service-Shutdown Problems

Note: there are problems with shutting down this service!  So far, the only way I have found to truly stop it is to reboot my system.  Also, when attempting to delete the service (sc delete redis242 or such), you will not be able to truly delete it as long as any Windows Service-Manager windows are open.

Once the RedisService.exe is installed and actually working, even "sc delete" requires a system reboot to take effect, since you cannot otherwise truly stop the service.

The good news...
Although this service is problematic (as of when I wrote this tech blog entry), the program will run as a service and the client (redis-cli) can now be executed against the service-induced redis-server to test SET/GET of keys, etc.
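
For example, a quick sanity check from the command line might look like this (assuming the default port of 6379; the exact prompt text may vary by build):

c:\Redis>redis-cli
redis 127.0.0.1:6379> SET testkey "hello"
OK
redis 127.0.0.1:6379> GET testkey
"hello"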

If you are interested in accessing Redis.io from JavaScript, you may want to read my blog about NPM (Node Package Manager), where my example for installing a Nodejs module used the node_redis module.  I am able to access my Redis (NoSQL) DB from both Linux and Windows versions of Nodejs via the node_redis module's functionality from within JavaScript.

Tuesday, November 29, 2011

NPM (Node-Package-Manager) Package Install Error

NPM (Node-Package-Manager) Package Install Errors:
Misleading Error Messages

I recently found myself encountering some nearly useless NPM errors while trying to install the node_redis package (i.e., a Redis.io database connectivity layer for Nodejs).  It did not instantly occur to me why I was seeing error messages during NPM's archive-unpacking operation, especially as the Node-Package-Manager was dumping out a list of errors that made little sense.

Under Linux, the NPM install was producing errors that implied I did not have permission to the temporary directory (or directories) the package was being unpacked into for installation (i.e., the temporary location where the tarball / tar.gz file would be unpacked).  Under Windows, the same NPM install (in a git-bash window) produced different errors that made it appear the downloaded .git package could not be unpacked for god knows what reason, all hidden in a massive error dump that had nothing to do with the REAL cause of the error.

Well, I figured out why the errors occurred (as detailed in this blog entry), and I also learned that this NPM software is a perfect example of what I consider a non-user-friendly interface, in that it presents the user with all sorts of completely meaningless error messages in the event of very simple-to-detect issues.  Bottom line: the node package manager is written by geeks, for geeks.  This is not "enterprise grade" software by any means, as it does not have robust error detection, traps, or messages.  In fact, the error traps it does employ seem to mislead more than assist: this is simply poor design.  (Note: I give credit to ANY open-source effort like this, though, and I understand why people focus on other functionality vs. "usability".)

Node Package Manager Windows Behavior and NPM Error Messages due to using wrong .git file URL

I have been using Node (Nodejs) for "server-side javascript" development on a Windows 7 Pro x64 machine, with Node "installed" as simply the Node.exe download placed in a local directory of c:\node

From within that directory (using the git-bash terminal window on Windows that was installed with the Git SCM tool version 1.7.7.x Windows .exe installer), I executed the following command:

c:\node>node ./npm install -g https://github.com/mranney/node_redis.git

...but, that produces the following error dump. Can you see why? It was all too clear to me a bit later...
c:\node>node ./npm install -g https://github.com/mranney/node_redis.git


npm ERR! couldn't unpack C:\TEMP\npm-1322598499268\1322598499268-0.7676450146827847\tmp.tgz to C:\TEMP\npm-1322598499268\1322598499268-0.7676450146827847\contents
npm ERR! Error: ENOENT, no such file or directory 'C:\TEMP\npm-1322598499268\1322598499268-0.7676450146827847\contents\package\package.json'
npm ERR! Report this *entire* log at:
npm ERR!     <http://github.com/isaacs/npm/issues>
npm ERR! or email it to:
npm ERR!     <npm-@googlegroups.com>
npm ERR!
npm ERR! System Windows_NT 6.1.7601
npm ERR! command "node" "c:\\node\\npm" "install" "-g" "https://github.com/mranney/node_redis.git"
npm ERR! cwd c:\node
npm ERR! node -v v0.6.3
npm ERR! npm -v 1.0.105
npm ERR! path C:\TEMP\npm-1322598499268\1322598499268-0.7676450146827847\contents\package\package.json
npm ERR! code ENOENT
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR!     c:\node\npm-debug.log
npm not ok

Notice the URL I mistakenly specified for the install package... THAT is what causes the error.  Yes, something as simple as copying and pasting the incorrect URL from the project's github page will lead to this tragic error-dump.

This is where I went wrong:

...the default URL shown in the repository's clone-URL textbox on GitHub is the HTTP version of the URL.  I had accidentally copied that and pasted it into my command prompt for my "npm install" command, when instead I needed to select the git:// version of the URL:


Now I could copy and paste the proper git:// prefixed URL of the node_redis.git package that I wanted to install with NPM.

c:\node>node ./npm install -g git://github.com/mranney/node_redis.git

...which produces the simple one-line output as a result of a "successful" package install:

redis@0.7.1 c:\node\node_modules\redis

Ah, that is better!

Now, on to what this same issue presents like under Linux...

Node Package Manager Linux Behavior and NPM Error Messages due to using wrong .git file URL

I have been using Node (Nodejs) under OpenSuse 12.1 x64 KDE.  When I first encountered this issue under Windows, I quickly jumped over to my Linux VMware virtual machine to see if for some reason it was a Windows-implementation-only issue (since Node and NPM have been more mainstream on Linux, and NPM under Windows is still considered "experimental").

This quick test under Linux helped me see the error of my ways quickly, since I encountered a similar mess of misleading error messages spewing forth from the node-package-manager when I copied and pasted my npm install command to Linux and executed it:

~/node> node ./npm install -g https://github.com/mranney/node_redis.git

Yes, it failed similarly to the Windows NPM version, but with an error message (pasted here) that sure made it appear like the "tar" (unpack) command had failed due to some inability to work with the temp .tgz file created as part of the install process...


npm ERR! Failed unpacking /tmp/npm-1322603639405/1322603639405-0.840085425414145/tmp.tgz
npm ERR! couldn't unpack /tmp/npm-1322603639405/1322603639405-0.840085425414145/tmp.tgz to /tmp/npm-1322603639405/1322603639405-0.840085425414145/contents
npm ERR! Error: `tar "-zmvxpf" "/tmp/npm-1322603639405/1322603639405-0.840085425414145/tmp.tgz" "-o"`
npm ERR! failed with 2
npm ERR!     at ChildProcess.<anonymous> (/home/mike/node/npm/lib/utils/tar.js:217:20)
npm ERR!     at ChildProcess.emit (events.js:70:17)
npm ERR!     at maybeExit (child_process.js:359:16)
npm ERR!     at Process.onexit (child_process.js:395:5)
npm ERR! Report this *entire* log at:
npm ERR!     <http://github.com/isaacs/npm/issues>
npm ERR! or email it to:
npm ERR!     <npm-@googlegroups.com>
npm ERR! 
npm ERR! System Linux 3.1.0-1.2-desktop
npm ERR! command "node" "/home/mike/node/npm" "install" "-g" "https://github.com/mranney/node_redis.git"
npm ERR! cwd /home/mike/node
npm ERR! node -v v0.6.2
npm ERR! npm -v 1.0.106
npm ERR! 
npm ERR! Additional logging details can be found in:
npm ERR!     /home/mike/node/npm-debug.log
npm not ok


Having already performed successful node module installs with NPM under my Linux host, I knew that I must be missing something obvious.  And, without any help at all from those meaningless error messages, I finally saw that I had pasted the "https://" address of the github node module instead of the git:// version.  Fixing this mistake, the module installed just fine under both Linux and Windows.

This revelation (about my mistaken "https" vs. "git" prefix), coupled with my frustration at misleading NPM error messages, led to the writing of this blog in case anyone else runs into this issue and gets misled by utterly meaningless error messages.

I hope someone working on NPM (the package manager for node) will eventually make time to implement proper condition-testing (and meaningful error-reporting) for command-line parameters as important as the package/module URL, given that the format makes such a tremendous difference in results.  Had this software simply tested the URL for a valid format and told me "sorry, you must use the git:// prefix" (or suggested an equally effective alternative), it would have saved me, and surely others, from wasting time on misleading error messages.


Redis (Redis.io) Database via Node

As an aside, I can report that I am able to access my Redis (NoSQL) DB from both the Linux and Windows versions of Node via the node_redis module's functionality from within JavaScript.  I hope to add more blog material about my experiences with installing and using Redis with Node at a later date.
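
For anyone curious, a minimal sketch of that usage (my own example; it assumes a Redis server listening on the default localhost:6379) looks like this:

var redis = require("redis");
var client = redis.createClient();   // connects to 127.0.0.1:6379 by default

client.on("error", function (err) {
    console.log("Redis error: " + err);
});

client.set("testkey", "hello", redis.print);   // prints "Reply: OK"
client.get("testkey", function (err, reply) {
    console.log("testkey = " + reply);         // prints "testkey = hello"
    client.quit();                             // close the connection so node can exit
});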

Tuesday, October 25, 2011

Saving Money in Information Technology and the Data-Center

Green Technology : Energy-Savings Techniques

Saving Money by Reducing Energy Consumption

I was just reading the Computerworld top 12 green-IT organizations list for 2011, and after many pages of reading, it was clear that certain energy-savings strategies were quite common across the range of companies featured in the article. I am all for saving energy, as this not only reduces the carbon-footprint of an organization, but it also saves money!

Some of the techniques noted should spur ideas about how you and your technology organization can save money by focusing on being "green".  Here is a summary of how the top-12 companies made substantial strides in reducing their energy consumption through energy-savings initiatives that led to real and measurable financial cost savings too:

  • Virtualization: this is a significant savings opportunity for many of the companies featured in the roundup of green IT organizations.  Virtualization is moving beyond server and network virtualization and into the virtualization of desktops also, for some of these companies.  The ultimate objective of this push into virtualization is to consolidate (computing) workloads onto as few machines as possible and achieve very high physical-machine utilization rates.

    Allstate Insurance was featured for its ability to reduce its total computing server and device count by 3,000 units while realizing a cumulative energy-reduction of nearly 40% in the past several years. NBC Universal virtualized 60% of their physical servers and shut down 2,000 physical machines. Northrop Grumman posted substantial energy savings through widespread virtualization also, and they are considering thin-client desktops for further savings. Citigroup has gone as far as to require all new servers be virtual, unless physical servers are justified; this has reduced their power and cooling requirements by 73%.

  • Cooling system efficiency: modern data centers pack incredible amounts of computing power into confined areas, and as such they require substantial cooling systems (i.e., air-conditioning).  Kaiser Permanente was a focus company in this area, as they achieved energy-consumption savings across their three data centers that are nothing short of incredible, "cutting an eye-popping 7.2 million kilowatt-hours of power from overall data center operations [this year] -- and over $770,000 from power budgets."

    How did they do it? They focused on everything from sealing up air leaks (which provided the biggest win) to a sophisticated real-time monitoring system for measuring cooling-system efficiency.  They were able to reduce cold spots in the data center (which are signs of inefficiency) and avoid overprovisioning power distribution and cooling infrastructure in general.  NBC Universal was also mentioned for having implemented similar smart power distribution units and rack-level environmental and power metering sensors, allowing them to increase rack densities by as much as 200%.

  • Raising ambient temperature (in the data center): the IT group at KPMG was featured for its efforts, which included "raising the ambient temperature in the data center to improve efficiency by more than 5%, raising the temperature of the water in the cooling tower to improve efficiency by 5%".  Raytheon was also featured for how IT cut energy use in each telecommunications closet by 30% when it raised the temperature by 10 degrees Fahrenheit, to 75 degrees.

    One related news bit I did not see mentioned in this Computerworld article is that Dell has recently announced that some of their servers (currently the R610 1U and R710 2U varieties) are capable of running at what are essentially typical outdoor ambient air temperatures (in nearly all of the USA, nearly all year round).

    These run-hot-capable Dell products (and Dell, the company and the stock: NASDAQ:DELL) are worth watching, as the energy-savings implications of these high-operating-temperature Dell servers are huge. There are also Dell storage arrays, switches, and power-distribution units that are certified to run hot.  I expect more computers (especially servers) will end up being certified to run hot like this, which could vastly reduce cooling costs for IT facilities as allowable ambient air temperatures approach outdoor levels (lower-cost air movement by way of fans, rather than chillers, could become the norm for much of the year).

  • Blade-Server technology: KPMG was again featured for its migration to blade server technology, with "the average blade server consuming about 50% less power than a comparably configured rack-mounted server".  Other firms were mentioned for their push to increase rack computing densities through such means.

  • Data-Compression and storage de-duplication to reduce physical hardware required to house company data.  The finance firm State Street was noted for its efforts in this area where they reduced storage use by between 40% and 50%.  This should lead to about a similar percentage savings on both power and equipment costs related to storage.

  • Private Cloud Computing and High-Performance-Computing (HPC) Clusters were mentioned by some of the top-12 energy-saving firms. "Baker Hughes created a high-performance computing (HPC) cluster that incorporates wake-on-LAN technology. In this environment, machines are turned on -- or "woken up" -- via a network message, and the company can wake up machines for use in the HPC cluster as needed. This setup uses 40% less energy than a dedicated HPC pool."

  • Solar panels have been installed by some of these firms (KPMG), and surely other types of alternative energy will become more common in data-center planning in the future.  I didn't see any mention of things like backup power based on modern fuel-cell technology or micro-turbines and such, but I would expect that some of this is in play at various firms.

  • Telepresence to reduce travel. Various firms, including KPMG and Allstate, were mentioned for taking steps to reduce employee travel, not just locally, but nationwide inter-office travel as well.  Streaming-video and web-conferencing technologies are employed to make this possible.

    To me, it seems incredibly obvious that one of the largest "green" moves any firm can make is to enable workers to work from home whenever possible.  Though, simply put, I do believe that many workers are just utterly incapable of managing their time and remaining focused while working at home as they let all the distractions of home interfere; this explains why, with all this modern technology that would make a vast number of jobs possible from home, there still is not widespread work-from-home opportunity.  It will probably require $10/gallon gasoline to make a substantial change in this area.

Tuesday, October 18, 2011

SVG onload event not firing : Firefox bug / feature with Shortcuts

FireFox not firing SVG onload event

(Windows) Shortcut handling to blame...

I do a fair amount of work with SVG (Scalable Vector Graphics) images / files that contain embedded JavaScript for various event-driven interactive-SVG components. The onload() event, within SVG files, is something I regularly use too. Today I ran into a strange "feature" or "bug" that shows up in Firefox but not in the Google Chrome / Chromium browser — related to this onload event in an SVG document.

I use Chrome as my default browser, especially because I like the included developer tools a lot, so most initial testing of my web-page HTML, SVG, and JavaScript code takes place in Chrome before I move on to testing in other browsers (like FireFox). I have been working on my custom SVG RAD Components ('tis what I currently call them), and because I use one particular .SVG file as the main "test rig", I kept a Windows shortcut on my Windows 7 desktop for quick access to that .SVG file.  I just click the shortcut to launch my SVG "application" (in Chrome) or drag the shortcut into a Chrome tab, and that works just fine.  Ah, but not in FF!

FireFox apparently does not resolve shortcut properly if dragged into browser

Being a creature of habit, when I was ready to test my latest SVG file and Javascript code within Firefox (using version 7 currently), I dragged my Windows shortcut (to my SVG file) onto the FF browser and poof... it seemed to load the SVG file, but my onload() event code simply failed to run.  I would have sworn I did this exact same drag-to-load (my SVG) with prior versions of Firefox successfully, but either way, it is not working now.

So, I loaded the page again via drag-and-drop of my Windows shortcut to my SVG file, but this time with Firebug running (debugger / developer tool).  I quickly saw that an error was being generated, whereby the event code referenced in the onload() event was reported as "myOnloadFx is not defined" within the onSVGLoad() code in FireFox (evt=SVGLoad). Clearly something strange was going on here, as this code "works" and has worked in Firefox before.

I played around with the code inside the SVG file a bit, and moved the onload() code from the opening SVG tag's onload="myOnloadFx()" to an inline-script (using <script> tags) just before the SVG's closing tag... and, the problem persisted.  So, what the heck?  After wasting more time on this than I ever should have, I then decided to go to the directory in which the SVG file really existed (vs. using the Windows shortcut to open it), and I dragged the .SVG file onto the Firefox window where it opened fine and ran the onload() event code just as Chrome did.  So, the shortcut-dereferencing/resolution is apparently to blame.


FireFox : want your SVG Javascript onload to fire? Do not open the SVG by dragging a shortcut onto FF


Now I know.  Note: the code executed in my onload() event was in an external Javascript file that is "included" in the SVG by way of code like this: <script type="text/javascript" xlink:href="myExternalSVGcode.js"/>  
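
For context, a stripped-down sketch of such an SVG file (the file and function names match the snippets above; the rect is just filler content) looks roughly like this:

<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     onload="myOnloadFx()">
  <script type="text/javascript" xlink:href="myExternalSVGcode.js"/>
  <rect width="120" height="40" fill="silver"/>
</svg>

...where myExternalSVGcode.js simply defines the handler, e.g., function myOnloadFx() { alert("SVG loaded"); }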

Firefox is not resolving the file locations for included code like this against the actual SVG file's directory; it instead looks in the directory where the shortcut resides (in my case, the desktop).  I confirmed this to be the problem simply by moving a copy of the referenced Javascript file onto the desktop along with the shortcut (.lnk) and voila!  It "fixed" the issue.  UNREAL.

I have not tested other types of scenarios where this could be a problem, and it is unlikely most people will ever encounter this unless they do software development (and perhaps even just SVG/Javascript with included external javascript files).  But, just in case, I figured I would post my notes here for anyone else that may encounter this weird onload behavior in Firefox.

Friday, October 14, 2011

Embarcadero Delphi XE2 / FireMonkey Review Findings : Not Ready, Incomplete

Delphi XE2 : Reviewers / Users Not Pleased

UPDATE: Delphi XE3 release date is here, and I have posted a blog entry about Delphi XE3 New Features; some of the concerns with XE2 / FireMonkey may be resolved with FireMonkey2 / XE3 improvements. 

I am a long-term user of the Embarcadero Delphi (formerly Borland, Inprise, and Codegear branded Delphi) RAD (Rapid Application Development) IDE, programming language (Object Pascal), and VCL (Visual Component Library).  Recently I wrote a blog about the Exciting New Features in Embarcadero Delphi XE2 and how much I was looking forward to putting the new FireMonkey components to use in particular.  Also, while reading this, keep in mind that (for me) FireMonkey was the one thing that was going to keep me interested in this product for my Windows development platform of choice.

Well, here is the simple fact of the matter: Delphi XE2 / FireMonkey is not ready for prime-time.  I have discovered that I am not alone in this opinion, and it appears that Embarcadero has done itself no favor by releasing what many consider an unfinished product.  Perhaps I am premature in predicting the final demise of this platform and product, but that is essentially what I am doing: it is over... at least for me.  I have used Delphi since version 2.0, going all the way back to 1995, and have been a loyal and avid fan of the development language, IDE, and components.  But, I am 99% sure that I am finally calling it quits. Here's why...

Poor and Missing Documentation

I have totally had it with the total CRAP documentation that has come with Delphi ever since Delphi 7.  I have heard promises of improvements for years, and with every small improvement comes a host of missing documentation, bad or missing hyperlinks within the documentation, a poor (and god-awful slow and inefficient) documentation-browsing interface, etc.  I long for the old days when I could quickly find all the help I needed and quickly navigate links to related information; those days are long gone.

I just can not take it anymore.  It is absolutely inexcusable how pathetic the documentation efforts have been, regardless of which company has owned the rights to Delphi (Borland, Codegear, Embarcadero).  Clearly documentation is barely a step above an afterthought for these companies lately, as they probably consider it nothing but an "expense"; never mind that this expense is what leads to a product I actually may want to purchase!  With all sorts of new features simply nowhere to be found in the documentation, my level of frustration is epic.  I have seen as good, or perhaps better, documentation efforts for open-source projects (you know, all those projects that are notoriously lacking documentation).

FireMonkey Bugs, 64-bit bugs, Missing Features

I am not going to re-hash everything that people are experiencing with FireMonkey, as it just furthers my frustrations with this incomplete and buggy product.  But, here are some of the things that are worth noting, as pointed out on blogs like Delphi Haven and AnalogMachine:


  • the omission of Actions and action lists
  • aside from the very basics, Keyboard handling is crippled
  • Property/method reference documentation is largely missing (as I more than hinted at earlier)
  • TMemo has serious bugs.
  • Substantial differences (and missing capabilities) in FM form-designer behavior vs. VCL-forms designer behavior.
  • the 64-bit Compiler has some serious bugs.
  • And many more...
Delphi is now looking more and more like a total dead-end for me and certainly will not help my career in the least; I am no longer willing to wait for bug-fixes to technological improvements that ship without proper testing.  And, I do not want to continually pay for the right to deal with such poor quality.


Delphi XE2 Update-1 and Update-2

Embarcadero has released the first couple of "updates" for Delphi XE2 (Update 1 and Update 2) that will supposedly begin the process of correcting some of the deficiencies in this product. But, I am at the point where I do not care. Why? Because we all know (based on past behavior) how this will go... they will issue approximately 4 "updates" (i.e., BUG FIXES) to the product during its life-cycle before Delphi XE3 is released, at which point you will be forced to PAY AGAIN for what should be free and continued bug-fixes!

And, like darn near every single "release notes" published in the past 5 years by these folks, this XE2 Update 2 release-notes document also has screwed up hyperlinks to the supposed "list of fixes".  Currently, in the "General" section of this document, see where it says (I cut/pasted this from their site):
"For a complete list of the specific problems fixed in this update, see the "List of Bug Fixes in Delphi and C++Builder XE2 Update 2" at: <a href="http://edn.embarcadero.com/de/article/40984/">http://edn.embarcadero.com/de/article/40984/</a>"
Well, guess what folks... the above quoted HTML really links to some OLD Delphi XE Update 1 list of fixes and even points to the German location (/de).  THIS IS SO TYPICAL!  AGAIN, WHERE IS THE QUALITY-CONTROL!?  I really would have liked to see what is truly "fixed" in this XE2 update.  Ughghgh!  Do we now have to post a QC (Quality Central ... heh, "quality", yeah... ok) entry to report this messed up URL?  Again, this is TYPICAL and has been the case for years.  I think it is done ON PURPOSE since they really have no list of updates prepared!  But, who is dumb enough to keep doing this and keep peeving their customer base?!

This practice of shipping bug-ridden, unfinished software products is common among MANY software firms these days... they put out an unfinished/buggy product, patch it a few times for "free", then force users to purchase yet another "upgrade" to get any future fixes/improvements regardless of how strong the case is that there are still substantial bugs in the product you PAID for!  It gets old, and it gets very darn expensive! And, I have seen long-standing substantial bugs go unfixed through multiple major-product-versions/iterations (Delphi and their help-system clearly have had long-standing substantial issues).

Delphi XE2 : the end of the road for my Delphi use

For the reasons laid out already, and for the added fact that there are VERY FEW Delphi jobs / contracts available anywhere these days, I am leaning toward totally ditching closed-source development tools like Delphi.  There are some great free and open-source products and technologies out there that I am going to be evaluating in depth and preparing to move to.

I am likely to go with HTML5 / SVG / JavaScript wherever possible, and I may totally ditch "native Windows applications" except for those situations where I absolutely require the performance and sophistication I can only achieve with compiled code.  I have prior versions of Delphi that I will continue to use to support some of the systems I have written for my own use, of course, but I also plan to begin the inevitable migration to other technologies.  I am not looking forward to migrating my systems that include lots of SQL-Server interaction (using VCL dbExpress or ADO components), but such is life.

Although I believe Delphi has provided a super-productive development environment and language for developing Windows applications over the years, I also see the writing on the wall: Delphi is a niche tool whose niche continues to shrink as people move to web-based everything.  And, Embarcadero is not helping Delphi's cause when they release buggy unfinished software with equally pathetic documentation.  Perhaps I will again assess Delphi when the inevitable Delphi XE3 appears on the scene, but for now, I am avoiding any further investment of my time and money into this product.

UPDATE (January 2012) for anyone still interested in Delphi XE2: Embarcadero has released Delphi XE2 Update 3 and has put a new Delphi XE2 ISO Image with Update 3 online.  I have not taken time to download and install it, though I did read through the release notes of fixes in Delphi XE2 Update 3, and it appears they are slowly working through the massive pile of major errors and issues.  Surely it must be "better" than the initial premature release of XE2. But, I am still a major skeptic and will only consider a hands-on Delphi re-evaluation when Delphi XE3 is released -- and, Embarcadero had better take time to ensure a much higher quality product if they are to convince me to ever use their development tools again.

Saturday, September 17, 2011

VMware Workstation 8.0 New Features

All of us eagerly awaiting VMware Workstation 8.0 could probably have guessed that its official release date was quite near, given recent discussions about related VMware server products, like the blog I did just a couple weeks ago about VMware ESXi 5.0 New Features and vSphere 5.0 New Features.

There were hints of what to expect, with the desktop applications aligned closely with the ESXi/vSphere products (VMware Workstation 8.0 for Windows and Linux environments, and VMware Fusion 4.0 for the Apple/Mac crowd). One obvious "hint" you may have noticed during the ESXi 5 / vSphere 5 release was the new (version 8) virtual-machine format with 3D (Windows Aero) and USB 3.0 support. And, from what I am seeing in the new Workstation 8.0 features, the interrelation between ESXi/vSphere and Workstation is perhaps closer than ever before; or so it seems from the strategy VMware is pursuing with some of the more substantial new features in VMware Workstation version 8.0!


New Features in VMware Workstation 8.0



IMPORTANT NOTE: a relatively modern 64-bit x86 CPU is REQUIRED on your host-system for this new version of Workstation! (i.e., EM64T Intel chips or AMD64)

Simple and Powerful Interaction with vSphere/ESXi 5.0



As VMware's web page emphasizes as a substantial new feature in Workstation 8.0: "With Workstation 8 we have embraced the cloud. Workstation 8 can remotely connect to virtual machines running on vSphere, vCenter, and even another copy of Workstation on your network." That should surely get your attention if you are like me and run both ESXi (on a networked server) and Workstation (on my desktops) and hope for an improved way to utilize your virtual-machine infrastructure and investment.

Keep in mind, I am approaching this blog topic from the perspective of a software developer who uses a variety of development virtual machines under both virtualization products (ESXi/Workstation); this is not necessarily about best practices or ideals for your virtualized production systems. I welcome the new features in VMware Workstation 8.0 that will simplify my use of both products, and one thing that Workstation 8.0 will now let me do is use this new Remote Connection feature (as an alternative to the vSphere client) to access my ESXi VMs and power them on/off, suspend/reset them, clone/snapshot them, mount DVDs/ISOs, and even alter their hardware profiles (e.g., memory, disk, NICs, etc.). Being able to do this through a single UI (the Workstation 8 application) will certainly cut down on some redundancy/clutter on my development desktops.

This new Remote Connections ability will allow you (via the new Connect to Server feature) to connect to remote hosts running Workstation, ESX 4.x and later, and VMware vCenter Server. After you establish a connection to a remote host, all of the virtual machines (permissions taken into account) on the remote host are available to you in the virtual machine library. The connection-steps (to a remote ESXi server) are much like you would expect (i.e., not too different than the vSphere login):
  • File (menu), "Connect to Server..."
  • Provide the ESXi Host/vCenter IP address (as Server Name) and the associated user-name/password; then "Connect".
  • You may be presented with an information box about the server (security) certificate presenting some problems, and you can choose to "Always trust this host with this certificate" and "Connect Anyway"
  • ...from this point, you will see the VMs you have access to and be able to interact with them much like you would in vSphere client.

One thing I have found very annoying (with ESXi 4.x and Workstation 7.x) is that I have had no simple way to quickly change which VMware product a particular virtual machine runs within (a situation most typically encountered when it turned out that I needed more *graphical* speed for some highly-interactive development application that ESXi was not particularly optimal for when using just a "console" view of the hosted ESXi VM).

So, if I had a VM that I was using on my ESXi server, and I suddenly wanted to host that VM within my VMware Workstation environment instead, my options were a bit limited: basically, I found myself using the VMware Converter (most recent version being 5.0, if you wondered) to perform the moves, and/or copying the various directories (of files making up the virtual machines) around. Worst of all were the inevitable differences in VMware Tools versions that I would encounter, which, when updated, would cause my MS-Windows development VMs (especially Windows 7!!) to decide they were in need of "Activation" again! (The real issue here is how messed up this Microsoft licensing-mess is: as a Microsoft ISV/Partner you are supposedly entitled to a fair number of installs and/or activations without issue, but those activation counts can get consumed quickly if you are migrating VMs regularly and triggering their hardware-change-detection crap or whatever!)

Given that vSphere 5.0 included the new (version 8) virtual-machine format with 3D (Windows Aero) support, perhaps this will alleviate some of the bottlenecks I have seen with certain graphically-intensive development-environment UIs (though most UIs ran "acceptably" for me even under vSphere 4.0, some lagged behind the response I can achieve on a local Workstation VM). But, what if I still want to move VMs between the various virtualization products for some reason?

Well, according to the release-notes: "Upload to vSphere: Workstation 8 enables users to drag and drop a VM from a user's desktop to VMware vSphere. (from what I can gather, this is being accomplished by essentially using the VMware OVF Tool — a command-line utility that allows you to import and export OVF packages to and from a wide variety of VMware platform products — behind the scenes). This feature allows users to deploy a complete application environment from a PC to a server for further testing, demoing, and analysis." OK, that is pretty cool! And, it is rather simple:

  • First, the VM you want to drag-and-drop to ESXi/vSphere must be powered down...
  • Next, drag your local VM (under "my computer" or whatever) to the target vSphere host IP, which I presume you have remote-connected to already (and which is listed within the same tree-view showing your available VMs)...
  • Once you "drop" it (release the mouse button) on the target host, you will be presented with an "Upload Virtual Machine Wizard" that will prompt you to choose the desired VM name and datastore (on ESXi)...
  • ...the move-process will now run (though, some caveats may exist)...
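
As an aside, the OVF Tool mentioned in that release-notes quote can also be run by hand if it is installed separately; a hypothetical invocation (the VM path, credentials, and host IP here are placeholders of mine) to push a local VM to an ESXi host would look something like:

ovftool "C:\VMs\DevVM\DevVM.vmx" vi://root@192.168.1.50/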

What happens if you drag-n-drop a local VM to a vSphere server that supports a different virtual-hardware version? Well, the process does apparently check for that condition and will warn you if the target host is not compatible with your virtual machine's current hardware (e.g., if you try to drag a Workstation 8 / virtual-hardware-version-8 VM to an ESXi 4.1 host, which would not support this newest version of the virtual hardware).

Can you drag-and-drop an ESXi/vSphere VM to VMware Workstation 8? Good question! I have not found any official reference saying that you can drag and drop from VMware vSphere to Workstation. I am in the process of setting up some test VMs to play with the different possibilities and test the limitations of all this. Again, one of my main concerns is that I do not trigger the darn Windows-7 activation and license crap every time I test this stuff, so I plan to first test this with some Linux VMs and other non-activation-infested OS options, and then try it on my "real" development OS VMs in more detail. If anyone has fully tested whether Windows-7 (especially Pro, 64-bit) will trigger a re-activation when migrating between ESXi 5.0 and Workstation 8.0, please let me know.

Next, though I focused on ESXi hosts for inter-machine VM sharing, keep in mind the statement from the release notes that you can share remote connections with hosts running Workstation, VMware vSphere, and VMware vCenter(TM). VMware is pushing this ability as a way to allow VMs to be accessed by teammates, providing a quick way to test applications in a more production-like environment. This looks simple enough, as you just use the "Share Virtual Machine Wizard" and set permissions on the shared VM. I don't have an ideal test environment where I can try this yet, but perhaps later.


New Virtual Machine Hardware Capabilities, etc.


To me, the biggest part of this Workstation 8.0 release is the new remote connections and vSphere interchange feature, but there are definitely some other new features worth mentioning. In summary, quoted from VMware's release info: "Improved Virtual Machine Capabilities: With support for HD audio with 7.1 surround sound, USB 3 and Bluetooth devices, Workstation 8 delivers new levels of virtual machine performance. In addition, improvements to virtual SMP, 3D graphics performance and new support for 64-GB RAM allows users to run the most demanding applications in a virtual machine."

That all sounds good to me. I definitely welcome any improvements to graphics performance within my virtual machines. In addition to software development and programming environments (which are becoming more graphically intensive all the time), I also run VariCAD within a VM (for 2D / 3D CAD work when I am visually brainstorming some new "invention" or whatever). Such applications can always benefit from optimizations to the VM graphics drivers; hopefully this brings performance closer to "native" levels.

The USB 3.0 support is interesting, but I really do not care about the HD audio features (I rarely listen to music on my computer), and I also do not use Bluetooth for anything currently. To me, the ability to enable Virtual VT-x/EPT or AMD-V/RVI in the processor settings interface (allowing a guest to take advantage of these virtualization technologies) is more interesting, as is the ability to run 64-bit guest operating systems inside of vSphere running inside Workstation. I do find it a bit odd how VMware allows for 64GB RAM with Workstation while they rather "cripple" ESXi 5.0/vSphere 5.0 with their latest (and controversial) licensing scheme (which, if you did not hear, they did "give a bit" on, but it is still not as nice as what ESXi 4.x offered).

There are some really compelling new features in this latest release of Workstation 8.0, especially if you run a heterogeneous virtualization environment that includes ESXi 4.x / 5.x and/or vSphere 4.x / 5.x in addition to Workstation. If you need more details, I suggest going to VMware's website and checking out the various information they have online.

Tuesday, August 30, 2011

Embarcadero Delphi XE2 New Features of Interest

Embarcadero Delphi XE2 Review of New Features — VERY Interesting Features!

I have been anxiously awaiting the official Embarcadero Delphi XE2 release now that this RAD (Rapid Application Development) IDE and Component-Set are poised to bring about some of the most exciting changes in Windows (and cross-platform!) application development seen in a long time. Yes, Embarcadero (formerly Borland, Inprise, and Codegear branded) Delphi is finally stepping up the application-development game and introducing some exciting technology to address shortcomings in modern MS Windows business-software-applications development.

Although many will cite the most interesting new features as being support for cross-platform development, Windows 64-bit, the Amazon Cloud API, and native iOS support, for me the obvious "killer feature" is the new vector-based UI-development component suite and technology called "FireMonkey". The reason I choose this as the "killer feature" is that 1) I have felt that Windows UI development has been rather "stale" for years when building *native* (compiled executable) applications, 2) this technology is what makes large portions of the other features (like cross-platform development) even possible, and 3) I consider this technology capable of building real mission-critical business applications.

Delphi XE2 FireMonkey : Vector-based User-Interface Development

Delphi XE2 FireMonkey: Could it be a Disruptive Technology?

FireMonkey is the moniker Embarcadero has applied to their new scalable-vector-graphics-based Graphical User Interface (GUI) framework, which leverages the capabilities of modern GPUs (Graphics Processing Units) for hardware-accelerated cross-platform GUIs. If that sentence did not make it obvious: this is BIG, people!

FireMonkey really could be the disruptive technology we have been waiting for with regards to developing compelling UI's for business applications. It is about time scalable vector-graphics melded with mainstream business applications (and yes, I am aware that Flash and Silverlight are vector-based technologies; I just still do not consider them to be something I want to build any large-scale enterprise applications with).

If Embarcadero is successful at marketing this vision of how modern UI's should be developed, we could be on the cusp of a huge shift in *native* application UI technology (note: I consider HTML5/JS advancements also disruptive, but in a different way and for a somewhat different target-market).

So, vector-based User Interfaces (UIs) are soon to be a reality for Delphi developers and the software applications they create (and therefore Microsoft Windows environments), but FireMonkey holds even more promise than simply modernizing our UIs — this technology is what will allow the resulting UIs to be cross-platform capable. FireMonkey provides UI elements that will ultimately look the same across the various deployment targets: 32-bit Windows applications for Windows 7, Windows Vista, and XP, plus the Server OS's... 64-bit Windows applications for Windows 7, Windows Vista, and XP, plus Server 2003 and 2008... and even Apple OS X 10.6 and 10.7 applications and iOS 4.2+ applications. What? Apple? Since that cross-platform development tool news is going to be of substantial interest to many people, I will discuss it later in this blog; likewise, you may have taken note of the 64-bit reference, which will also be discussed in more detail.

Delphi XE2 FireMonkey: Where did it Come From?

FireMonkey is based on VGScene/DXScene, which was created by KSDev (Eugene A. Kryukov) and then purchased by Embarcadero quite recently (late 2010 - early 2011). KSDev sold VGScene/DXScene as a VCL component package prior to the acquisition, and KSDev's final release on 1/13/2011 was considered "feature complete".

KSDev marketed their components as "a Delphi VCL library for WPF-like framework with advanced controls, styles, graphics and effects", with the core functionality built around a powerful vector engine (similar in concept to how Adobe Flash works) with modern features like real-time anti-aliased vector graphics, resolution independence, alpha blending, gradients, special visual fills, etc.

I actually looked into using DXScene / VGScene over a year ago when I found myself, once again, thinking how outdated my Delphi GUI applications looked. I found it utterly annoying that native executable Windows applications (in general), and the UI elements that made up their GUIs, did not SCALE easily when switching between various screen sizes and pixel densities, and that there was no easy way to give my applications a more "modern" look without purchasing a bunch of third-party controls. And, purchasing third-party controls to address the "look" issue still did not resolve the issues with easy scaling of UI elements.

After looking at the KSDev stuff, I actually opted not to embrace their components, for a few reasons. First and foremost (at the time), I considered them a high-risk "niche" component set that I was unwilling to build mainstream applications for my customers with. I have seen all too many Delphi VCL component sets wither (think Rave Reports) and/or completely die off over the years, and even if source code is available, many component sets are just so specialized that it would take far too great an investment to continue using them in the event the developer "gave up" on them or failed to produce necessary bug fixes and so on. KSDev had a neat thing going with their components, and thankfully Embarcadero has picked up that work and provided the credibility and reassurance I need to actually implement business applications using that technology now in Delphi XE2!

Delphi XE2 FireMonkey: Is it Like X, Y, or Z?

This cross-platform application framework uses GPU-accelerated vector graphics to render UI elements: Direct2D/Direct3D on Windows, and OpenGL on OS X. You can think of it as similar to Silverlight OOB or Jupiter (the new “application model” for Windows 8), or even Adobe Flash. When I consider the goals of Microsoft's "Jupiter", I have to wonder if perhaps FireMonkey is essentially the same thing... here is how a ZDNet article from early 2011 described Jupiter:
Jupiter is going to be a new user interface (UI) library for Windows, built alongside Windows 8. It will be a thin XAML/UI layer on top of Windows application programming interfaces and frameworks for subsystems like graphics, text and input. The idea is Jupiter will bring support for smoother and more fluid animation, rich typography, and new media capabilities to Windows 8 devices.
Hmmmmm... sure sounds quite similar with regards to the end-result (the UI people see), though thankfully the FireMonkey implementation is not a pile of XAML and over-complexity that Microsoft always seems to come up with.

Instead, FireMonkey uses the familiar Delphi (Object Pascal) language and VCL (Visual Component Library) paradigm for its implementation, and compiles to native code. To me, working with FireMonkey is just like working with any other VCL components, and honestly, anything that keeps me from having to learn yet another Microsoft UI-technology-of-the-day is a plus (I just can not deal with XAML).

FireMonkey: Will Microsoft FUD Bury it Before it Takes Hold?

At least part of me is concerned that Microsoft will somehow work its usual FUD (Fear, Uncertainty, and Doubt) campaign against Embarcadero (with regards to FireMonkey) as they work feverishly to bring their own "Jupiter" vision and Windows-8 to market.

For all you LONG-TIME Delphi developers, do you remember how the Visual Basic vs. Delphi thing played out over a decade or more? Clearly Borland was (or at least should have been) light-years ahead in the RAD IDE and component-based Windows development tools/language space (especially in OOP that people could understand, unlike the C++ that dominated mainstream "real" Windows apps prior to Delphi), but Microsoft worked very hard to convince developers (and corporate management) that investing in Delphi was a bad move... that Delphi was too risky... all the while working to "catch up" with Visual Basic and push that as the "solution" to corporate UI-development needs. I have used Delphi since version 2, and the fact is, Microsoft was not even remotely close to having anything as capable until perhaps the days of Delphi 2006.

As we all know in retrospect, Microsoft's strategy worked in a BIG way, and only after burying Delphi and relegating it to the niche market they fabricated through FUD did MS create the semi-decent (though wildly bloated) DotNet component set and the reasonably nice C# language (which is clearly based on Delphi to some extent). I am concerned that somehow Microsoft will wage such a war again if they decide Embarcadero is a "threat"; or, do they even need to?

The fact is, Microsoft's decade+ campaign of marginalizing otherwise promising, and even superior, development languages and technologies has been so successful that Embarcadero has a monumental task ahead of it: convincing mainstream corporate developers to actually embrace this technology. Good luck with that!

There are so many "competing" priorities pulling at corporate IT folks and budgets that I see this as a battle that is going to be VERY difficult to win. Winning it may require some serious willingness to put some flesh on the line and suck up some losses while doing "a Microsoft": dumping the product out there en masse, even at a potential loss, to foster widespread adoption and gain the all-important "critical mass" necessary to propel the product forward and create a self-sustaining win instead of a self-fulfilling-prophetic loss (for lack of developer density, etc.).

FireMonkey is also up against HTML5/JS hype, up against Silverlight/Flash and the forthcoming "Jupiter", and so many other competing technologies... how is it going to gain traction? When we (developers) search sites like Dice.com and see essentially ZERO postings for Delphi developer jobs (compared to oodles of C# or Silverlight or HTML5/JS jobs), what are we to do? 

Embarcadero best be thinking long-term and be willing to take a bit of "a hit" (financially) to gain a foothold, or the simple fact is: the niftiest technology in years for UI development may make little difference to market penetration and adoption. Get your marketing/sales team (especially the latter; since "marketing" is sales without responsibility for producing revenue) ramped up NOW Embarcadero, and have them start working some serious deals with software developers to get them to use this product! OK, enough said... on to more about this tech...

FireMonkey: Embrace it, and Embrace Changes to Your Existing Applications

FireMonkey is an entirely new framework for UI development, and as such, it is incompatible with your current/traditional VCL-based UIs; the two will not co-exist in the same application (i.e., if you want to port an existing application UI to the new FireMonkey technology, you will have to rewrite your GUI code).

This sure sounds a bit overwhelming, but I really think this gutsy move by Embarcadero is what will actually give FireMonkey a fighting chance — the technology is not encumbered by the burden of legacy support! This makes the implementation MUCH cleaner — we all can attest to how much we welcome the opportunity to write an application or component "from scratch" as compared to modifying a many-revision-old, widely used (and thus many possibilities to "break" something), piece of code. I expect this new code to be architecturally solid and much more ideal thanks to separating it from the older UI-VCL components.

FireMonkey Components are Containers

You will also have to get used to a bit of a paradigm shift with regard to how components are assembled, and I think it is another shift that is for the better and about time: FireMonkey components are all containers, meaning you can embed any component inside any other component. When you think about it, this makes total sense.

Something as simple as a button is composed by assembling 9 components that, when put together, produce what looks and behaves as a Button should. A FireMonkey Button consists of: a TLayout component to organize the control layout, three TRectangles for border, background, and foreground color, a TLabel for the Button text, and then a group of four additional components (two each for animation and effects).
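
To make the container idea concrete, here is a minimal sketch of nesting one FireMonkey control inside another at runtime. This is a hedged example: the unit names (FMX.Types, FMX.Objects) and the TRectangle/TText classes are as I understand the XE2 unit layout, so treat it as illustrative rather than gospel.

    uses
      FMX.Types, FMX.Objects; // assumed XE2 FireMonkey unit names

    procedure TForm1.BuildBadge;
    var
      Back: TRectangle;
      Txt: TText;
    begin
      // Every FireMonkey control is a container, so a shape can parent text...
      Back := TRectangle.Create(Self);
      Back.Parent := Self;        // the rectangle sits on the form
      Back.Position.X := 20;
      Back.Position.Y := 20;
      Back.Width := 140;
      Back.Height := 40;

      Txt := TText.Create(Back);
      Txt.Parent := Back;         // ...and the text lives *inside* the rectangle
      Txt.Text := 'I am nested in a TRectangle';
      Txt.Width := Back.Width;
      Txt.Height := Back.Height;
    end;

Swap the TText for a TButton, or even another TRectangle, and the same parenting works; that is the whole point of the component-as-container model.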

The animations are going to give us the visual mouse-over/out on the button (like we have been used to seeing on websites for years), and the effects can occur on events like button-press, focus, etc., and make even niftier things like "glow" effects happen and so on. This type of animation/effects ability is present throughout all FireMonkey components thanks to the way these containers and component build-ups can be implemented.  I look forward to using this to "modernize" the look and feel of my applications, though we all need to keep in mind that this could be over-used quite easily.
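
Here is a rough, hedged sketch of wiring up that kind of hover animation and focus glow in code. I am assuming the XE2-era unit names (FMX.Ani, FMX.Effects) and the trigger-string syntax ('IsMouseOver=true') shown in FireMonkey previews; the shipping release may differ in detail.

    uses
      FMX.Types, FMX.Ani, FMX.Effects; // assumed XE2 unit names

    procedure TForm1.DecorateControl(AControl: TControl);
    var
      Anim: TFloatAnimation;
      Glow: TGlowEffect;
    begin
      // Animate Opacity from 0.8 to 1.0 over 0.2s whenever the mouse hovers.
      Anim := TFloatAnimation.Create(AControl);
      Anim.Parent := AControl;
      Anim.PropertyName := 'Opacity';
      Anim.StartValue := 0.8;
      Anim.StopValue := 1.0;
      Anim.Duration := 0.2;                       // seconds
      Anim.Trigger := 'IsMouseOver=true';         // play on mouse-over
      Anim.TriggerInverse := 'IsMouseOver=false'; // reverse on mouse-out

      // Add a glow that lights up while the control has focus.
      Glow := TGlowEffect.Create(AControl);
      Glow.Parent := AControl;
      Glow.Trigger := 'IsFocused=true';
    end;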

You may also want to think about how to standardize the look/feel of your application elements, and thankfully FireMonkey implements a parallel to CSS styles through its own FireMonkey "Styles". I am not yet sure how far these Styles can be pushed, but I am hopeful this first version is good enough for most things. I think about how much change CSS has gone through as we push into CSS3, and I wonder if future iterations (Delphi XE3, XE4, etc.) will extend the power of these Styles just like CSS keeps growing its abilities.

In some regards, these Delphi FireMonkey styles are quite a bit more advanced: you can implement things like blurs, animations, and so forth, via styles. Again I have some concern about pushing UI-glitz TOO far, but, no matter what, Styles should make standardizing and quickly updating the look-and-feel of applications a LOT easier!
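
As a tiny, hedged illustration of the concept (the names here are hypothetical; 'CoolButtonStyle' would be a style resource you define in a TStyleBook attached to the form, and DarkStyleBook is a placeholder for a second TStyleBook):

    // Point a control at a named style resource, much like a CSS class:
    Button1.StyleLookup := 'CoolButtonStyle';

    // Or restyle the whole form in one shot by swapping its style book:
    Form1.StyleBook := DarkStyleBook;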

Perhaps FireMonkey applications will be the advertising-force Embarcadero needs to gain further recognition: when users and developers start seeing native applications that are simply stunning, they may start to ask "what is that written in?" This could be a positive thing, but I also can imagine some applications getting so ridiculous with animating every last aspect of the UI that, when that previous question is asked, it will be with a bit of disdain or ridicule.

Hopefully we all use this power wisely :)


What about Non-Visual Components?

You may already be thinking: what about all the non-visual classes and components I use, like TList, TStringList, etc.?

Have no fear: these non-visual components will remain the same as what you are used to and will also be usable from your FireMonkey-based-UI applications. The fact is, if you have already done a decent job of separating your UI-implementations from the underlying event-code, database-interaction, and such, you may not have TOO difficult of a time updating your applications.

You are not going to have "data-aware" components like TDBMemo anymore under FireMonkey, but of course there is an alternative way of going about this. The new "LiveBindings" within the FireMonkey framework allow you to connect any type of data to any UI or graphical element in VCL / FireMonkey; consider this a mechanism for creating "live" relationships between objects and also between individual properties of objects. It has some serious potential!

I am excited by this feature, and look forward to seeing how far I can push these live interrelationships. LiveBindings are going to allow you to do things you can not do with existing data-aware controls too. And, LiveBindings are *not* just limited to FireMonkey controls (i.e., there is support for this technology in the "old"-style VCL too, as part of the Delphi XE2 updates). You will be able to do things like bind the "Caption" property of a TLabel to the field values in a dataset (or the column name, etc.), and much more.

Since the "bindings" are accomplished using an expression engine (vs. just simple hard-coded bindings), you can bind to evaluated values. E.g., bind your label control's caption to an expression like TDBColumn.DisplayName + " column value is:" + dataset.field.valueAsString (pseudo-code used for example). You get the idea. It really is powerful.
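
For a sense of how this might look in code, here is a hedged sketch of a managed binding that feeds a VCL label from an expression over an edit box. The TBindings factory calls and the Associate helper follow XE2-era samples I have seen (unit System.Bindings.Helper); treat the exact signatures, and the double-quoted string literal in the expression language, as provisional.

    uses
      System.Bindings.Helper; // TBindings factory and Associate (assumed XE2 unit)

    procedure TForm1.BindEditToLabel;
    begin
      // Bind Label1.Caption to an *expression* over Edit1 -- note the
      // computed prefix; this is not just a hard-coded property copy.
      TBindings.CreateManagedBinding(
        [TBindings.CreateAssociationScope([Associate(Edit1, 'src')])],
        '"Current value is: " + src.Text',
        [TBindings.CreateAssociationScope([Associate(Label1, 'dst')])],
        'dst.Caption',
        nil);
    end;

    procedure TForm1.Edit1Change(Sender: TObject);
    begin
      // The engine re-evaluates when told the source property changed:
      TBindings.Notify(Sender, 'Text');
    end;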

But, that is not all... If you choose, you can implement bi-directional property-to-property bindings (which sure sounds like data-aware functionality). This bi-directionality implies something somewhat profound: it should be possible to consolidate UI-element frameworks so they no longer require those TDB... versions of each control (i.e., remove the need for "data-aware" versions of each control), since something like a Label can be bi-directionally "bound" and suddenly become that data-aware control.

This is going to take some hands-on experience to get used to, but it is a significant step forward (and, should bring writing "data-aware" custom controls into the realm and reach of many more developers; I say this because I have always found writing TDBxyz data-aware custom components WAY too difficult!).

Note: I have read that the expression engine used by LiveBindings is available to us in our programs to evaluate any Object Pascal expression dynamically at runtime; this should make for some interesting applications too!

Cross-Platform Native Applications using Delphi

OK, this topic certainly deserves some attention, especially from all of us that can still remember the days of Kylix, which was a nifty idea but one that failed miserably for all sorts of reasons (one being the simple fact it was not maintained at all after early releases). Well, with that memory pushed aside, let's think about the prospects of true cross-platform NATIVE-code deployment again.

As mentioned earlier, FireMonkey provides UI elements that will ultimately look the same across the various deployment targets: 32-bit and 64-bit Windows (Windows 7, Vista, XP, and the Server OS's), Apple OS X 10.6 and 10.7, and iOS 4.2+. In addition, there is some speculation that Android and Linux support will be forthcoming soon after the release of Delphi XE2 (hopefully as a free update!!)

The IDE Runs Only On Windows: But You Can Deploy to Other Targets

It is not surprising that the Delphi XE2 RAD IDE runs only on Windows, though part of me wonders if Embarcadero will get around to converting the IDE to be FireMonkey-based (if even remotely possible?) and make the IDE run on any target-platform. Regardless, for now it is Windows only (as it always has been; aside from Kylix), and we developers will have to go through a few extra steps to compile and deploy applications to the Apple targets.

Delphi for Apple OSX/iOS

From what I have gathered via online discussions (I have not tested the Apple deployment stuff at all, nor do I have much initial concern for it even though long-term I expect to support Apple targets), Embarcadero / Delphi is apparently relying on the FreePascal Compiler (FPC) to compile code for deployment to other (non-Windows) target operating-systems. 

The FPC compiler will use the same source code you have written for your Windows-based applications (the code Delphi's compiler used to generate Windows binaries), and the FPC will generate binaries (i.e., native apps) that can be run on an Apple / Mac computer and/or iOS device (i.e., a single collection of source code yields multiple platform-specific binaries, thanks to some FPC help).
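
In practice, single-source/multi-target development leans on conditional defines to isolate the platform-specific bits. A minimal sketch (MSWINDOWS and MACOS are the standard Delphi conditionals; the paths and the 'MyApp' name are just placeholders):

    uses
      System.SysUtils;

    // One function, two platforms: each build target compiles only its branch.
    function ConfigDir: string;
    begin
      {$IFDEF MSWINDOWS}
      Result := GetEnvironmentVariable('APPDATA') + '\MyApp\';
      {$ENDIF}
      {$IFDEF MACOS}
      Result := GetEnvironmentVariable('HOME') + '/Library/Preferences/MyApp/';
      {$ENDIF}
    end;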

There are also significant limitations with what all can be simply recompiled and deployed to the Mac. I am under the impression that outside of FireMonkey, substantial portions of the VCL will not be available on the Mac yet (I may be wrong). And, I really can not imagine some things EVER being supported on the iOS/OSX platform (especially some of the "native" database-access stuff).

You will certainly need to use FireMonkey for any UI you plan to have run on the Apple side of things, but in addition, I suspect there will be all sorts of other caveats regarding what will and will not "port" directly simply via a recompilation. Again, I see the Apple thing as a longer-term possibility for me. I'd be more intrigued with Linux deployment immediately (since I have Linux running in a Virtual Machine or two). Time will tell. I look forward to seeing what people are able to achieve on the Apple platform with Delphi XE2.

Delphi Applications for Cloud / Hosted Scenarios

I came across a sentence online somewhere stating that "Delphi and C++ applications can be deployed to Amazon EC2 and Windows Azure, with support for Amazon Simple Storage Service API, Queue Service, and SimpleDB." This is interesting, but I really do not know exactly what was needed to support this, and so far, I have not had the need for this.


Delphi XE2 64-Bit Support

Native 64-bit Windows applications are something that quite a few Delphi developers have clamored for over the past couple of years, and apparently they are getting their wishes fulfilled. Delphi XE2 is to include support for 64-bit Windows targets, including a debugger and deployment manager. And, it looks rather easy to deploy an application as a 64-bit application.

In the Delphi XE2 Project Explorer, you will see a new node under each project where you can choose your "Target Platforms". By default, your existing projects will have a "target platform" entry suited for deploying to 32-bit Windows. Next, you can add a new 64-bit Windows platform target node to the tree, select it, recompile, and voila! You have a 64-bit executable.

I do not know how many developers really "need" 64-bit capabilities (such as for addressing very large blocks of memory or working with 64-bit integers, etc.), but now the capability is there and Delphi need not be considered lacking in this regard. You will (most likely) have to do some minor "code review" to make sure you do not have any code that is for some reason only 32-bit-safe; e.g., if you are doing bit-level manipulation shifting bits around in INTs, doing crazy things with pointers, etc. I do not expect most people will have significant work to do in this regard prior to using the 64-bit compiler and deployment.
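
Here is a quick console-style sketch of the classic gotcha: casting a pointer to a 32-bit Integer works on Win32 but truncates on Win64. NativeInt, which sizes itself to the platform's pointer width, is the safe replacement:

    procedure PointerCastDemo;
    var
      P: Pointer;
      N: NativeInt; // 32 bits on Win32, 64 bits on Win64
    begin
      P := @N;
      // BAD on a 64-bit build -- Integer is always 32-bit, so this truncates:
      //   N := Integer(P);
      // GOOD -- NativeInt always matches SizeOf(Pointer):
      N := NativeInt(P);
      {$IFDEF WIN64}
      Writeln('64-bit build: SizeOf(Pointer) = ', SizeOf(Pointer)); // prints 8
      {$ELSE}
      Writeln('32-bit build: SizeOf(Pointer) = ', SizeOf(Pointer)); // prints 4
      {$ENDIF}
    end;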


Delphi Reporting Components Update : Finally!
Goodbye Rave Reports (Junk!)

OK, I could not obtain 100% confirmation of this quite yet, but rumor has it that Delphi XE2 will include the FastReport VCL 4 RAD Edition reporting tool — whether true or not, the fact is I refuse to invest ANY more time using Rave Reports (what a buggy pile of @#!@ that is; an embarrassment that needed addressing. Nevrona's pathetic "support" and glacial pace of resolving any issues and bugs caused me and many others to become utterly fed up with the product and move elsewhere).

FastReports surely has to be a better option by a long shot, as it is actively maintained and developed. Compare that to how Nevrona can not even update their *website* for years on end. I can not believe how long it took Embarcadero to move past Rave Reports... perhaps they (or Borland) made the stupid move of signing some longer-term contract with Nevrona without any sort of "out" for failing to meet certain quality and support criteria. Who knows. But, I am excited by the prospect of having a good reporting tool included with Delphi by default!


Other New Features in Delphi XE2 Worth Noting

More details will emerge quite soon. In fact, I am supposed to listen to a Webinar about the product launch tomorrow; hopefully a near-term Delphi XE2 release date will be announced. And, I also hope the RTM (i.e., final, release-ready) version of Delphi XE2 is truly "ready" and not full of a bunch of annoying bugs.

My guess is that like most recent releases, there will be an update-pack available for it nearly as soon as it is officially "released"; hopefully it is solid enough to be truly prime-time ready. I have quite a few Delphi applications I want to update to take advantage of these new features ASAP.

I am not a big "DataSnap" user, but this release is supposed to have a fair amount of updates to that functionality. There are components and functionality related to the new "Cloud" stuff, like TAzureQueueManagement, the Amazon Simple Storage Service API, Amazon Queue Service API, Amazon SimpleDB API, and so on. I think Documentation Insight (a Delphi XML documentation tool) is new too.

Either way, there is a LOT of new stuff packed into this XE2 release, as already discussed. To me, the FireMonkey stuff alone is a HUGE chunk of functionality and makes me quite eager to start building some fantastic XE2-based applications leveraging these features.


Delphi XE2 — CONCLUSION: Enterprise Applications are Poised for a Major Update

As reported in this brief SD Times article and interview, Michael Swindell, senior vice president of product management for Embarcadero, seems to be clearly positioning Delphi XE2 and FireMonkey where I see it making the most sense: business applications:
“We know where we should be going with the experience of non-entertainment applications,” he said. [in reference to the fact that FireMonkey ships with about 200 user-interface controls that include GPU-powered scalable vector and 3D effects] 
[...] 
Swindell emphasized that FireMonkey is focused on heavy-duty business applications—not entertainment or advertising sectors, where rich Internet applications already are strong. To that end, FireMonkey introduces a feature called Live Binding, which lets developers bind any UI control or graphical element to any data source, he said. Native CPU application execution and data access allow FireMonkey applications to perform at a very high level, he added. 
[...] 
“We saw this as a gap and as where applications need to go,” Swindell said. “Companies continue coming out with 1990s-style Windows Forms applications and rolling their own frameworks. There hadn’t been anything out of the box to get [developers] there quickly and with a lot of power.”
I could not agree more with that concluding quote about the "gap" that existed. Being a business software developer, I am ready to address that gap and use Delphi XE2 to do so.

Here's hoping Delphi XE2 / FireMonkey gains some widespread adoption and ushers in an age of resurgence in Delphi software development!