The efficient computation of high-compression encoding transforms is key to the transmission of moderate- and high-resolution imagery over low- to moderate-bandwidth channels. Previous approaches to image compression have employed low-compression transforms for lossless encoding, as well as moderate-compression transforms for archival storage. Such algorithms are usually block-structured and thus tend to be amenable to computation on array processors, particularly embedded SIMD meshes. These architectures are important for fast processing of imagery obtained from airborne or underwater surveillance platforms, particularly underwater autonomous vehicles, which tend to be severely power-limited. Recent research in high-compression image encoding has yielded a variety of hierarchically structured transforms, such as EPIC, SPIHT, and wavelet-based compression algorithms, which unfortunately do not map efficiently to embedded parallel processors with small memory models. In response to this situation, the EBLAST transform was developed to facilitate transmission of underwater imagery over noisy, low-bandwidth acoustic communication channels. This second part of a two-part series [1] presents implementation issues and experimental results from the application of EBLAST to a database of underwater imagery, as well as to common reference images such as Lena and Baboon. It is shown that the range of EBLAST compression ratios (100:1 < CR < 250:1) can be maintained with mean-squared error (MSE) less than five percent of full greyscale range, with computational efficiency that facilitates video-rate compression with existing off-the-shelf technology at frame sizes of 512 x 512 pixels or less. Additional discussion pertains to postprocessing steps that can render an EBLAST-decompressed image more realistic visually, in support of human target cueing.
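To make the reported figures concrete, the following is a minimal sketch of the two metrics quoted above: compression ratio (CR) and error expressed relative to the full greyscale range. It assumes 8-bit greyscale imagery given as flat sequences of pixel values; the function names are illustrative and not taken from the EBLAST papers, and the reading of "MSE less than five percent of range" as root-MSE over 255 is one plausible interpretation.

```python
# Fidelity and rate metrics in the form the abstract reports them.
# Assumes 8-bit greyscale images as flat lists of pixel values;
# function names are illustrative, not from the EBLAST papers.

FULL_RANGE = 255  # full greyscale range for 8-bit imagery

def mse(original, decoded):
    """Mean-squared error between two equal-length pixel sequences."""
    assert len(original) == len(decoded)
    return sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)

def rmse_percent_of_range(original, decoded):
    """Root-MSE as a percentage of the full greyscale range --
    one common reading of 'MSE less than five percent of range'."""
    return 100.0 * mse(original, decoded) ** 0.5 / FULL_RANGE

def compression_ratio(uncompressed_bytes, compressed_bytes):
    """CR as used in expressions like 100:1 < CR < 250:1."""
    return uncompressed_bytes / compressed_bytes

# Example: a 512x512 one-byte-per-pixel frame compressed to 1311 bytes
# yields a CR of roughly 200:1, inside the quoted range.
print(compression_ratio(512 * 512, 1311))
```

A decompressed frame would then meet the stated fidelity criterion when `rmse_percent_of_range(original, decoded) < 5.0`.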