
TIBCO EMS 8 Fault Tolerance + NFS4

Introduction

Starting from my “CentOS 6 + EMS” setup, I needed to build a decent FT/HA demo environment for test purposes.

My clients often try to improve EMS availability in case of a disaster or a simple software error. Not because EMS itself is unstable, but mostly to manage the risks related to MOM rollover (or FT), HA and DR. TIBCO does provide guidelines on the subject, but no precise recommendation on the combination of OS and distributed FS to use. Additionally, TIBCO experts usually push for file stores over DB stores. They state that DB stores should not be considered for performance reasons, even though a DB store strikes me as the simplest possible FT setup.

Many clients are then encouraged to create a software or hardware file-sharing system to allow multi-site FT. This article describes how to implement such a solution FOR TEST PURPOSES using a FREE SOFTWARE distributed filesystem (NFS4 on CentOS 6). The other popular option is to go with specialized HARDWARE SAN solutions. For budget considerations, I wondered if I could create a setup that performs well enough without going in that expensive direction.

According to the official EMS documentation, the requirements for a distributed EMS Fault tolerance back-end file system are:

  • Write Order
  • Synchronous Write Persistence
  • Distributed File Locking
  • Unique Write Ownership
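The locking requirements above can be partially smoke-tested from the shell. The sketch below is my own suggestion, not a TIBCO-sanctioned test: it uses flock to check that a second process cannot take an exclusive lock while a first one holds it. To say anything meaningful about FT, the two flock calls must actually run on two different NFS clients against the shared store path; here both run locally only to illustrate the mechanics.

```shell
# Minimal lock smoke test (run the two flock calls from two different
# NFS clients for a real FT test; here both run locally for illustration)
STORE="/tmp/ems-lock-test.db"   # substitute your NFS-mounted store file
touch "$STORE"

# Hold an exclusive lock on the store file for a few seconds
flock -n "$STORE" -c 'sleep 3' &

sleep 1
# While the lock is held, a second exclusive attempt must fail
if flock -n "$STORE" -c 'true'; then
  RESULT="lock not enforced"
else
  RESULT="second writer blocked"
fi
wait
echo "$RESULT"
```

If the second attempt succeeds across two real NFS clients, the filesystem is not enforcing unique write ownership and is unsuitable for EMS FT.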

Does CentOS 6 + NFS4 comply with all these rules ? I honestly don’t know. TIBCO does not provide a precise list of free/software or hardware storage solutions that follow these guidelines. They point out that NFS4 can, in some instances, comply if all of the above principles are respected.

DISCLAIMER : If TIBCO cannot certify an OS/FS pairing for EMS FT, I certainly can’t either. PLEASE CONSIDER THE FOLLOWING HOW-TO AS A SUGGESTED TESTING CONFIGURATION. PLEASE DO NOT APPLY THIS SOLUTION IN A PRODUCTION ENVIRONMENT WITHOUT PROPER TESTING.

Side-line editorial : I believe the future is full of easily distributed MOMs, and that this kind of difficult setup will disappear over time. Currently, some MOMs (IBM MQ, RabbitMQ) don’t make FT that hard to implement (but do not support WAN connections). In contrast, some MOM FT setups are similar to EMS’s (ActiveMQ).

To summarize, I aim to create something like this:

EMS FT goal
Latency will be introduced for testing purposes
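To introduce that latency, one option (my suggestion, not something from the TIBCO documentation) is the Linux tc/netem facility available on CentOS 6. The interface name eth0 is an assumption; substitute the interface actually carrying your NFS traffic:

```shell
# Add ~50ms of artificial delay (with 10ms jitter) on the NFS link
tc qdisc add dev eth0 root netem delay 50ms 10ms
# ...run the FT failover tests...
# Remove the artificial delay afterwards
tc qdisc del dev eth0 root
```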

NFS4 setup

As written above, start by going through my CentOS/EMS how-to and my CentOS firewall how-to.

Then, I suggest following an article like this one from cyberciti.biz to set up NFS4, and this one from digitalocean.com.

On my VM image, here is what it looked like:

yum install nfs-utils nfs4-acl-tools portmap
yum – all packages are there !
It seems my CentOS 6 VM already had all these packages.

Ok, nothing to do then… moving along to sharing the stores.
Let’s edit /etc/exports and add this:

vim /etc/exports

#add this line to allow all hosts in the VirtualBox subnet (assuming your address is in 10.0.2.*, like mine)
/opt/cfgtibco/tibco/cfgmgmt/ems 10.0.2.0/24(rw,sync,no_root_squash)
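After editing /etc/exports, the export can be applied and checked without a full restart. These are the standard nfs-utils commands; the exact output will vary with your setup:

```shell
# Re-read /etc/exports and list the active exports with their options
exportfs -ra
exportfs -v
# What a client would see
showmount -e localhost
```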

Turn on the relevant services:

chkconfig nfs on 
service rpcbind start
service nfs start

NFS service starts

Just to be on the safe side, I explicitly disabled NFS2 and NFS3 as mentioned here. In the file /etc/sysconfig/nfs, I added (uncommented):

MOUNTD_NFS_V2="no"
MOUNTD_NFS_V3="no"
RPCNFSDARGS="-N 2 -N 3"

…and restarted
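To confirm that versions 2 and 3 are really gone after the restart, rpcinfo can be queried (my own sanity check; the exact output format varies by distribution):

```shell
# Only version 4 entries for the "nfs" program (100003) should remain
rpcinfo -p localhost | grep nfs
```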

Firewall and VirtualBox ports updates

Now, let’s open the pertinent ports. If you refer to my firewall how-to, you know I use the iptables-restore command to load my config. Edit /root/iptables-save.txt and add a line like this one:

-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT

Then, we validate and save:

iptables-restore < /root/iptables-save.txt
iptables -L
/sbin/service iptables save

EMS primary instance setup

The “Primary” machine is almost ready… the EMS server still works (just start it to validate), and it is using the file stores locally, without NFS. We must modify the configuration slightly to allow the “Secondary” machine to share the JSON configuration file.

Note : This is really the big revolution of the new EMS configuration file format (JSON). In the past, each machine in the cluster had an almost identical “tibemsd.conf” system of files (some files could be shared, but usually not the main one). This made no sense, since the vast majority of the data was the same for every machine in the cluster. Now the configuration file is shared, and includes slight differences when using FT. (EMS User Guide, “Configuring Fault Tolerance in Central Administration”, page 539)

I use the EMS CA interface to change the relevant configuration:

Server Props
Server properties – NOTE : The IPs are wrong, see the section below.
FT propos
Fault tolerance properties – NOTE : The IPs are wrong, see the section below.

EMS secondary instance setup

To create the EMS “Secondary” machine, I suggest (all steps detailed below):

  • Creating a new script for the second EMS instance
  • Cloning the “Primary” VM
  • Validating the VirtualBox IP attribution
    • Correcting the configuration if necessary
  • Deleting the EMS data folder and NFS export configuration on the “Secondary” VM (config and stores)
    • Creating an NFS client at the same location

EMS secondary script

Before cloning, I built this script (tibemsd64-2.sh in EMS “bin” folder), and exposed it as a shortcut on the desktop:

cd /opt/tibco/ems/8.1/bin
./tibemsd64 -config "/opt/cfgtibco/tibco/cfgmgmt/ems/data/tibemsd.json" -secondary

Here is the shortcut command:

gnome-terminal -e "gnome-terminal -e /opt/tibco/ems/8.1/bin/tibemsd64-2.sh"

Reference : (EMS User Guide, “Starting Fault Tolerant Server Pairs”, page 109)

“Primary” VM cloning

Thanks VirtualBox !

“Secondary” VM IP validation

When I first started my VMs, I realized they had the SAME IP address (10.0.2.15)… this is when I realized that I needed to switch the VMs’ networking mode from “NAT” to “NAT Network”.

Changed in VirtualBox main "preferences".
To be changed on each VM

As simple as that, but now, sadly, the IPs have changed ! I ended up with 10.0.2.4 and 10.0.2.5.

Another side-effect: the port-forwarding settings (tutorial here) for each separate VM are lost; they have to be re-implemented on the NAT network itself. As such:

New port forwarding for NAT network

I had to adjust the JSON file by hand, since the EMS server and EMSCA processes won’t start without proper listeners.

Here is my updated JSON file:

{
	"acls":	[],
	"bridges":	[],
	"channels":	[],
	"durables":	[],
	"emsca":	{
		"advanced":	[],
		"appliance_options":	{
			"store_paths":	[]
		},
		"emsca_listens":	[{
				"url":	"tcp://10.0.2.4:7222"
			}, {
				"url":	"tcp://10.0.2.5:7222"
			}]
	},
	"factories":	[{
			"jndinames":	[],
			"name":	"ConnectionFactory",
			"ssl":	{
				"ssl_issuer_list":	[],
				"ssl_trusted_list":	[]
			},
			"ssl_issuer_list":	[],
			"ssl_trusted_list":	[],
			"type":	"generic",
			"url":	"tcp://7222"
		}, {
			"jndinames":	[],
			"name":	"FTConnectionFactory",
			"ssl":	{
				"ssl_issuer_list":	[],
				"ssl_trusted_list":	[]
			},
			"ssl_issuer_list":	[],
			"ssl_trusted_list":	[],
			"type":	"generic",
			"url":	"tcp://localhost:7222,tcp://localhost:7224"
		}, {
			"jndinames":	[],
			"name":	"SSLConnectionFactory",
			"ssl":	{
				"ssl_issuer_list":	[],
				"ssl_trusted_list":	[],
				"ssl_verify_host":	false
			},
			"ssl_issuer_list":	[],
			"ssl_trusted_list":	[],
			"type":	"generic",
			"url":	"ssl://7243"
		}, {
			"jndinames":	[],
			"name":	"GenericConnectionFactory",
			"ssl":	{
				"ssl_issuer_list":	[],
				"ssl_trusted_list":	[]
			},
			"ssl_issuer_list":	[],
			"ssl_trusted_list":	[],
			"type":	"generic",
			"url":	"tcp://7222"
		}, {
			"jndinames":	[],
			"name":	"TopicConnectionFactory",
			"ssl":	{
				"ssl_issuer_list":	[],
				"ssl_trusted_list":	[]
			},
			"ssl_issuer_list":	[],
			"ssl_trusted_list":	[],
			"type":	"topic",
			"url":	"tcp://7222"
		}, {
			"jndinames":	[],
			"name":	"QueueConnectionFactory",
			"ssl":	{
				"ssl_issuer_list":	[],
				"ssl_trusted_list":	[]
			},
			"ssl_issuer_list":	[],
			"ssl_trusted_list":	[],
			"type":	"queue",
			"url":	"tcp://7222"
		}, {
			"jndinames":	[],
			"name":	"FTTopicConnectionFactory",
			"ssl":	{
				"ssl_issuer_list":	[],
				"ssl_trusted_list":	[]
			},
			"ssl_issuer_list":	[],
			"ssl_trusted_list":	[],
			"type":	"topic",
			"url":	"tcp://localhost:7222,tcp://localhost:7224"
		}, {
			"jndinames":	[],
			"name":	"FTQueueConnectionFactory",
			"ssl":	{
				"ssl_issuer_list":	[],
				"ssl_trusted_list":	[]
			},
			"ssl_issuer_list":	[],
			"ssl_trusted_list":	[],
			"type":	"queue",
			"url":	"tcp://localhost:7222,tcp://localhost:7224"
		}, {
			"jndinames":	[],
			"name":	"SSLQueueConnectionFactory",
			"ssl":	{
				"ssl_issuer_list":	[],
				"ssl_trusted_list":	[],
				"ssl_verify_host":	false
			},
			"ssl_issuer_list":	[],
			"ssl_trusted_list":	[],
			"type":	"queue",
			"url":	"ssl://7243"
		}, {
			"jndinames":	[],
			"name":	"SSLTopicConnectionFactory",
			"ssl":	{
				"ssl_issuer_list":	[],
				"ssl_trusted_list":	[],
				"ssl_verify_host":	false
			},
			"ssl_issuer_list":	[],
			"ssl_trusted_list":	[],
			"type":	"topic",
			"url":	"ssl://7243"
		}],
	"groups":	[{
			"description":	"Administrators",
			"members":	[{
					"name":	"admin"
				}],
			"name":	"$admin"
		}],
	"model_version":	"1.0",
	"queues":	[{
			"name":	">"
		}, {
			"name":	"sample"
		}, {
			"name":	"queue.sample"
		}],
	"routes":	[{
			"name":	"EMS-SERVER2",
			"selectors":	[],
			"url":	"tcp://7022"
		}],
	"stores":	[{
			"file":	"meta.db",
			"file_crc":	false,
			"mode":	"async",
			"name":	"$sys.meta",
			"type":	"file"
		}, {
			"file":	"async-msgs.db",
			"file_crc":	false,
			"mode":	"async",
			"name":	"$sys.nonfailsafe",
			"type":	"file"
		}, {
			"file":	"sync-msgs.db",
			"file_crc":	false,
			"mode":	"sync",
			"name":	"$sys.failsafe",
			"type":	"file"
		}],
	"tibemsd":	{
		"authorization":	false,
		"console_trace":	null,
		"detailed_statistics":	"NONE",
		"flow_control":	false,
		"ft_activation":	null,
		"ft_active":	null,
		"ft_heartbeat":	null,
		"ft_reconnect_timeout":	null,
		"ft_ssl":	{
			"ssl_ciphers":	null,
			"ssl_expected_hostname":	null,
			"ssl_identity":	null,
			"ssl_issuer_list":	[],
			"ssl_password":	null,
			"ssl_private_key":	null,
			"ssl_trusted_list":	[],
			"ssl_verify_host":	null,
			"ssl_verify_hostname":	null
		},
		"jre_options":	[],
		"log_trace":	null,
		"logfile":	"/opt/cfgtibco/tibco/cfgmgmt/ems/data/datastore/logfile",
		"logfile_max_size":	null,
		"max_connections":	0,
		"max_msg_memory":	"512MB",
		"max_stat_memory":	"64MB",
		"msg_swapping":	true,
		"multicast":	false,
		"password":	null,
		"primary_listens":	[{
				"ft_active":	true,
				"url":	"tcp://10.0.2.4:7222"
			}],
		"rate_interval":	3,
		"routing":	false,
		"secondary_listens":	[{
				"url":	"tcp://10.0.2.5:7222",
				"ft_active":	true
			}],
		"server":	"EMS-SERVER",
		"server_rate_interval":	1,
		"ssl":	{
			"ssl_cert_user_specname":	"CERTIFICATE_USER",
			"ssl_dh_size":	null,
			"ssl_issuer_list":	[],
			"ssl_password":	null,
			"ssl_rand_egd":	null,
			"ssl_require_client_cert":	null,
			"ssl_server_ciphers":	null,
			"ssl_server_identity":	null,
			"ssl_server_key":	null,
			"ssl_trusted_list":	[],
			"ssl_use_cert_username":	null
		},
		"statistics":	true,
		"statistics_cleanup_interval":	30,
		"store":	"/opt/cfgtibco/tibco/cfgmgmt/ems/data/datastore",
		"tibrv_transports":	null,
		"track_correlation_ids":	null,
		"track_message_ids":	null
	},
	"tibrvcm":	[],
	"topics":	[{
			"name":	">"
		}, {
			"exporttransport":	"RV",
			"name":	"topic.sample.exported"
		}, {
			"importtransport":	"RV",
			"name":	"topic.sample.imported"
		}, {
			"name":	"sample"
		}, {
			"name":	"topic.sample"
		}],
	"transports":	[{
			"daemon":	null,
			"name":	"RV",
			"network":	null,
			"service":	null,
			"type":	"tibrv"
		}],
	"users":	[{
			"description":	"Administrator",
			"name":	"admin",
			"password":	null
		}, {
			"description":	"Main Server",
			"name":	"EMS-SERVER",
			"password":	null
		}, {
			"description":	"Route Server",
			"name":	"EMS-SERVER2",
			"password":	null
		}]
}

NFS client setup

On the “Secondary” VM :

  • Disable the NFS server by removing the export entries in /etc/exports and restart the service
  • Remove the local configuration and stores
    • rm -Rf /opt/cfgtibco/tibco/cfgmgmt/ems
  • Create a (test) NFS client
    • mkdir /opt/cfgtibco/tibco/cfgmgmt/ems
      # as root, we mount the "root" 
      mount -t nfs4 -v 10.0.2.4:/opt/cfgtibco/tibco/cfgmgmt/ems /opt/cfgtibco/tibco/cfgmgmt/ems
  • Once it is confirmed to work, umount the NFS partition and add this line to /etc/fstab:
  • 10.0.2.4:/opt/cfgtibco/tibco/cfgmgmt/ems /opt/cfgtibco/tibco/cfgmgmt/ems   nfs    defaults 0 0
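With the fstab entry in place, the automatic mount can be validated without rebooting (paths assumed from the steps above):

```shell
# Mount everything listed in /etc/fstab, then confirm the NFS4 mount
mount -a
mount -t nfs4
df -h /opt/cfgtibco/tibco/cfgmgmt/ems
```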

GEMS update

One thing remaining is to update the GEMS configuration (see the first tutorial):

We enter the URLs like typical java clients: tcp://host1:port1,tcp://host2:port2

Then, to be sure, let’s restart both VMs, validate the automatic NFS link, and start both servers before starting GEMS:

...bingo ! 2 EMS servers in FT via NFS sharing !

I hope this testing rig will be useful to you !

Raspberry Pi – Video loop

Here is a nice weekend project, useful for automatic presentations, restaurant menus and waiting-room video entertainment/marketing/information (my case).

I used a 32″ TV with a Raspberry Pi B (512 MB RAM), a WiFi adapter and a 64 GB SD card (40 $CAD at Costco) to create a simple video-looping device.

My goal is simple: I want to use SFTP to upload videos into a folder, and have a simple Debian service loop and play each video file one by one.

Since I am not always there to restart the service, a simple reboot of the Pi (by unplugging it) should restart the whole thing. No technical knowledge required !

Here is how I did it:

  1. Create nice corporate information slide deck
    • I suggest creating a visually interesting PowerPoint presentation, with FULLY automated transitions and animations.
    • Test the presentation in PowerPoint.
    • Once it is perfectly tuned, export the PowerPoint to an MP4 video (sadly, only possible on Windows, not Mac OS)
  2. Setup the system
    • Install Raspbian with NOOBS on my 64Gb SD Card
    • Log-in with the pi user
    • If needed, setup wifi
      • type : startx
      • In LXDE, use the “Wifi” tool to setup the network
      • Exit LXDE
      • Reboot (sudo shutdown -r 0)
    • As a principle, I always upgrade the Debian packages and install vim (both steps are optional)
      • sudo aptitude upgrade
      • sudo aptitude install vim ctags
    • As we will use the CLI-based Raspberry Pi video player, omxplayer, there is no need for other software.
  3. Create the looping “application” (2 scripts and a folder)
    • Create a /home/pi/movies folder
    • Use your favorite SFTP client (ex: Mozilla) to push the corporate and other movie files to the Pi
    • Create the loop script (mine is named ~/declicTV/videoplayer.sh):
      mkdir /home/pi/declicTV
      cd /home/pi/declicTV
      vi videoplayer.sh

The code is greatly inspired by the script from here.
I have made a few improvements:

#!/bin/sh
#first version from http://www.cenolan.com/2013/03/looping-video-playlist-omxplayer-raspberry-pi/

# set here the path to the directory containing your videos
VIDEOPATH="/home/pi/movies"

# you can normally leave this alone
SERVICE="omxplayer"
SERVICE_OPTS="-o hdmi -n 3 -b"

# now for our infinite loop!
while true; do
        if ps ax | grep -v grep | grep $SERVICE > /dev/null
        then
                sleep 1
        else
                # quote the path so filenames with spaces still work
                for file in "$VIDEOPATH"/*
                do
                        #debug
                        echo $SERVICE $SERVICE_OPTS "$file"

                        #Display Files
                        $SERVICE $SERVICE_OPTS "$file"
                done
        fi
done
  • Then, it is a simple matter of adding a script to init.d, as explained here.
    • Don’t forget the update-rc.d part.
    • Here is my init.d declicTV script:
#! /bin/sh
# /etc/init.d/declicTV
# taken from http://www.debian-administration.org/article/28/Making_scripts_run_at_boot_time_with_Debian

# Some things that run always
touch /var/lock/declicTV

# Carry out specific functions when asked to by the system
case "$1" in
  start)
    echo "Starting script declicTV "
    nohup /home/pi/declicTV/videoplayer.sh > /home/pi/declicTV/videoplayer.log &
    ;;
  stop)
    echo "Stopping script declicTV"
    killall -9 -r videoplayer.sh
    ;;
  *)
    echo "Usage: /etc/init.d/declicTV {start|stop}"
    exit 1
    ;;
esac

exit 0
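For completeness, here is how I would install and register the init script above (this is the update-rc.d step mentioned earlier; the file name and paths are assumed from this article):

```shell
# Install the init script and register it to run at boot
sudo cp declicTV /etc/init.d/declicTV
sudo chmod 755 /etc/init.d/declicTV
sudo update-rc.d declicTV defaults
# Manual test before rebooting
sudo /etc/init.d/declicTV start
```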

I hope this will be helpful to you !

TIBCO NOW 2014 – Q&A

This is my first article on TIBCO NOW 2014.

Again this year, I was sent as a technology observer by my employer (Logimethods) to assess the new product line and business orientations of TIBCO.

This year was a great one.

The TIBCO folks have improved and refined their orientations on a handful of their core topics: Integration, Analytics and Data Management (second article on the way), and revamped their product line in a massive (good !) way (third article on the way).

Before I start writing about all these developments, I want to begin by answering many of my colleagues’ & clients’ questions about the TIBCO product line. These were gathered in the weeks and days prior to the conference.

How many clients are currently in production with BusinessWorks 6 ?

According to the product manager: Somewhere between 20 and 40.

What about the future of AMX Service Grid vs BusinessWorks 6 ?
Is BW6 the go-to product for SOA ?

An important TIBCO executive called Service Grid a “narrow field product”.
From a sales perspective, for new projects, BW6 is considered the way to go.

The whole conference was REALLY BusinessWorks 6 oriented, and all hands-on activities were full. The product seems very well designed, and has been tested intensively (both inside and outside TIBCO) for a long time.
As I will detail (in another article), BW6 takes what is best in many TIBCO products, and delivers a platform that can do much of what Service Grid and BusinessWorks 5 were doing.
BW 6.2 is coming out this month, and should be the new go-to version.

Service Grid will continue to be supported for a significant (undisclosed) time.
Note : Service Grid has recently been rebranded on the TIBCO site as a SOA Development product (read : not ESB)

What is the future of BWSE ? What of the migration from BWSE or BW5 to BW6 ?

If the BWSE was “unwired” (read : no Service or Reference SCA chevrons), then the conversion is as simple as with any BW5 project.

Wired BWSE will be more work.
As of now, the claim is that the migration tools cover about 95% of the migration work automatically.

When will TIBCO commit to a unified & modern administration console for all their products ?

NOW !

TEA is already the new administrator for BusinessWorks 6, but TIBCO has concrete plans to use it for all their products.

I have seen it used, or had confirmation of support, for : BusinessWorks 6, Business Events, EMS (read-only for now), Policy Director 2.0, Hawk, SPM, Security Server (a new component to be released with TEA 2.0)

There are rumours of BPM eventually making the jump too.

What are Vista’s plans for TIBCO after the acquisition ?

We received a (rather good) sales pitch on Vista:
Vista concentrates on acquiring good or great software companies, and making them even better.
They have been around for 14 years and have never lost money on an acquisition.
Due to the size of their portfolio (soon to include TIBCO), they are the 6th or 7th biggest software firm on the planet, with 30k employees in 50 countries.

Why they bought TIBCO:

  • Trusted partner with their customers
  • Mission critical PRODUCTS enabling client CRITICAL activity
  • Tech leadership focused on quality and innovation
  • Area for best practices adoption

Vista priorities:

  • Technological leadership… They plan to invest in TIBCO
  • Drive (read : push for) cloud enablement
  • Laser focus on : cloud integration, big data, internet of things
  • Continued enablement of fast data : provide an actionable opportunity for customer

What run-time will AMX BPM use in the future ?

AMX Service Grid for the foreseeable future

What should we expect about AMX 3.3’s planned end of support ?

No precise information gathered on the subject.

Any updates planned on Policy Director ? If so, is it staying on the AMX Grid runtime ?

Policy Director 1.1 and 2.0 are coming in the next quarter

The 1.1 update is mainly about bug fixes and providing support for the BusinessWorks 5 REST API plug-in (not Service Grid).

Version 2.0 will be deployed on TEA, and will then gain the extra perk of being fully scriptable in Python.

Can Java / Java Spring applications be hosted on BW 6 as they were on AMX Service Grid ?

As I understood from the BusinessWorks 6 experts, since it is now developed on an Eclipse platform, Java integration will be much better than in Designer.

Can’t wait to test it 😉

CentOS and VirtualBox NAT Port Forwarding

In my recent experiments, and while preparing for my next post, I stumbled on problems when trying to forward ports from my CentOS guest VM to my Mac OS based browser. For some reason, even though port forwarding was properly configured, like so :

Virtual Box Port NAT port forwarding on CentOS

…my host system browser could not connect to services on my VM, with the exception of SSH. The solution was simple: CentOS comes with default netfilter rules built in. These rules allow outgoing traffic, and incoming SSH requests only. The firewall configuration needed to be changed. The simplest way to do so is to disable the firewall completely:

# Run as root !
# Flush (discard) all rules
iptables -F
# Save configuration permanently
/sbin/service iptables save

While this may be fine for VM usage, I don’t like to disable a whole firewall (I feel unclean afterwards when I do). Here is a more sensible solution, which could be a starting point for a more solid security setup :

#Again... as root !
#See config
iptables -L
#save config
iptables-save > /root/iptables-save.txt
#edit config (see below for example)
vim /root/iptables-save.txt
# flush + load config
iptables-restore < /root/iptables-save.txt
# Validate
iptables -L
# Save for good
/sbin/service iptables save

For reference, here is my iptables-save.txt file:

# Generated by iptables-save v1.4.7 on Thu Jul 24 08:28:03 2014
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [471:1082248]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT 
-A INPUT -p icmp -j ACCEPT 
-A INPUT -i lo -j ACCEPT 
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT 
-A INPUT -p tcp -m tcp --dport 7222 -j ACCEPT 
-A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT 
-A INPUT -p tcp -m tcp --dport 8777 -j ACCEPT 
-A INPUT -j REJECT --reject-with icmp-host-prohibited 
-A FORWARD -j REJECT --reject-with icmp-host-prohibited 
COMMIT
# Completed on Thu Jul 24 08:28:03 2014

It should work immediately, but to really test the “iptables-restore” step, you should reboot.

Main references:
https://www.centos.org/docs/5/html/5.1/Deployment_Guide/s1-iptables-saving.html
http://wiki.centos.org/HowTos/Network/IPTables

Spreadsheet consolidation with Google Apps Script

Here is a demo I made to demonstrate the capacity of Google Drive, Spreadsheets and Google Apps Script to consolidate multiple spreadsheets from the same Drive folder, and to rapidly create simple and powerful enterprise applications.

This demo:

  • Reads multiple files (an unknown number) from a single folder, using predetermined rules (ex: files with a name containing “Table ”)
  • Consolidates the number of hours per week from all files
  • Writes all the result values into a new spreadsheet

My (next) goal will be to create a simple shared timesheet “webapp” for a small business. This demo is only the proof of concept. The goals of the future application will be to:

  • Leverage the current spreadsheet timesheet solution (keep the Excel file and/or make a similar replacement)
  • Completely remove the need for timesheet email sending, and copy/paste consolidation

The code of this article and all the files are available here.
(Copy into your Drive to edit)

Problem presentation

In a folder, we have an unknown number of spreadsheets containing similar information. There are specific markers in all source files. We use these markers to determine where to start/stop the information extraction. In the image below, those markers are “WEEK” and “FINISH”. The information is not located at the same spot in all 3 files.

Example of source table file

The general idea is to leverage Drive’s cloud qualities. In my target project (next article ?), the source files will be placed in a similar folder and correctly SHARED with all relevant personnel.

Just imagine what a timesheet system could be under such principles :

  • An HR person (or group) can own the whole folder, and create timesheets (annual, ideally) for all new employees.
  • Each employee has access to his file ONLY (and employees can delegate timesheet filling for any reason)
  • There are no emails to send
  • There are no Excel incompatibilities (Windows vs Mac, xls vs xlsx, macro support, etc.); everybody uses the latest version of the same spreadsheet client
  • The timesheet app can now easily access all of the Google Apps Script APIs : email notification, spreadsheet validation, Calendar integration, JDBC, web service calls, etc.

In light of all those advantages, and with the support of an easy-to-code (and debug) JavaScript IDE, Google Apps Script will be the VBA of the next decade.

The Google Apps Script code

Here are the three script files describing the solution. They are all included in the “Consolidate” Google Script project (link above).

The specialized Object/Array

To store the consolidated information, I use a specialized Object/Array. The code includes:

  • An initialization part (60 weeks reset to 0 in the constructor)
  • An “Add” method, which increments the associated week entry by the submitted number of hours
  • A “Report” method, which returns all non-empty consolidated “lines” from the array
function ConsolidationArray(arrayName) {
  this.arrayName = arrayName;
  this.weekArray = [];
  
  //Array init
  for( var i = 0 ; i <= MAX_NUMBER_WEEKS ; i++ ){
    //Any new attributes can be added here (ex : EmployeeType)
    this.weekArray[i] = {nbHours:0};
  }
}

ConsolidationArray.prototype.weekRowAdd = function(weekNumber, amount) {
  this.weekArray[weekNumber].nbHours += amount;
};

ConsolidationArray.prototype.returnConsolidatedNonEmptyWeeks = function() {
  var results = [];
  
  for( var i = 0 ; i <= MAX_NUMBER_WEEKS ; i++ ){
    var iNbHours = this.weekArray[i].nbHours;
    if (iNbHours > 0){
      results.push({weekNumber:i,nbHours:iNbHours});
    }
  }
  
  return results;
};

The utility method

This is a simple “find” method for spreadsheets. There is apparently no native way to search for a value inside a spreadsheet with GAS, hence the necessity of such a (lame) solution. The value of SEARCH_RANGE_PERIMETER is currently 26 columns by 200 lines, and can be adjusted via constants in the main code file (next section). The bigger the range, the longer the search will take.

// Based on http://stackoverflow.com/questions/10807936/how-do-i-search-google-spreadsheets

function find(ss, value) {
  
  var range = ss.getRange(SEARCH_RANGE_PERIMETER);
  
  var data = range.getValues();
  for (var i = 0; i < data.length; i++) {
    for (var j = 0; j < data[i].length; j++) {
      if (data[i][j] == value) {
        return range.getCell(i + 1, j + 1);
      }
    }
  }
  return null;
};

The main code

  • Determine the list of source files (all whose names contain “Table ”)
  • Open all files
    • Find the beginning and end markers
    • Extract the information in the specialized Array
  • Results are written from the array to another file (with unique filename based on timestamp)
//CONSTANTS
var SEARCH_RANGE_PERIMETER = "A1:Z200"
var MAX_NUMBER_WEEKS = 60;
var PARENT_FOLDER_ID = "0B46WfgcTStaBTkhxQ3M1WGozWWs";
var FILENAME_COMMON_EXP = "Table ";
var TOP_MARKER = "WEEK";
var CONSOLIDATION_HEADER_1 = "WEEK";
var CONSOLIDATION_HEADER_2 = "HOURS";
var BOTTOM_MARKER = "FINISH";

/*
Read all pertinent files, and use the specialized "ConsolidationArray" object to create a report.
*/
function consolidate(){
  var arr = new ConsolidationArray("consolidation");
  
  //Variables
  var files2Read = [];
  
  Logger.log("Consolidate - START");
  
  // Get parent folder
  var currentFolder = DriveApp.getFolderById(PARENT_FOLDER_ID);
  Logger.log("Try to find \""+FILENAME_COMMON_EXP+"\" in folder : "+currentFolder.getName())
  var files = currentFolder.getFiles();

  // Create source file list
  while(files.hasNext()) {
    file = files.next();
    name = file.getName();
    id = file.getId();
    
    // Add only files matching the expression (non-regex)
    if(name.indexOf(FILENAME_COMMON_EXP)>=0){
      files2Read.push({id:id,name:name});
    }

  }
  Logger.log(files2Read.length + " files found !");
  
  // For all source files
  for (var i = 0; i < files2Read.length; i++) {
    // Open spreadsheet (id of file i)
    var ss = SpreadsheetApp.openById(files2Read[i].id);
    Logger.log("---------File: "+files2Read[i].name);
    
    //Find top marker (and startLine + startColumn)
    var topCell = find(ss,TOP_MARKER);
    Logger.log(TOP_MARKER+" found at "+topCell.getColumn()+":"+topCell.getRow());
    var startLine = topCell.getRow()+1;
    var startCol = topCell.getColumn();
    //Find bottom marker (and endLine)
    var bottomCell = find(ss,BOTTOM_MARKER);
    Logger.log(BOTTOM_MARKER+" found at "+bottomCell.getColumn()+":"+bottomCell.getRow());
    var endLine = bottomCell.getRow()-1;
    Logger.log("Extraction will occur between Start:"+startLine+" Finish:"+endLine)
    
    //For each source line of current file
    for (var j = 0; j <= endLine - startLine; j++) {
      // Get values and use specialized array object to consolidate
      var weekNb = ss.getActiveSheet().getRange(startLine+j, startCol).getValue();
      var nbHours = ss.getActiveSheet().getRange(startLine+j, startCol+1).getValue();
      Logger.log("ADDING " + nbHours + " hours for week " + weekNb);
      arr.weekRowAdd(weekNb,nbHours);
    }
    
  }
  
  // Get all non empty array records
  var results = arr.returnConsolidatedNonEmptyWeeks();
  Logger.log("+++++RESULTS ");
  for ( i = 0 ; i < results.length ; i++ ){
     Logger.log("++"+CONSOLIDATION_HEADER_1+" "+results[i].weekNumber+" "+CONSOLIDATION_HEADER_2+" "+results[i].nbHours);
  }
  
  //Create report (new spreadsheet with unique name)
    
  // Create file
  var fileName = arr.arrayName+""+new Date().getTime();
  var ss = SpreadsheetApp.create(fileName);
  
  // Move the file from root to current folder
  var fileSS = DocsList.getFileById(ss.getId());
  var originalFolders = fileSS.getParents();
  var probableParent = originalFolders[0];
  var folder = DocsList.getFolderById(PARENT_FOLDER_ID);
  fileSS.addToFolder(folder);
  fileSS.removeFromFolder(probableParent);
  
  // Write content - Header
  ss.getRange("A1").setValue(CONSOLIDATION_HEADER_1);
  ss.getRange("B1").setValue(CONSOLIDATION_HEADER_2);
  
  // Write content - all results lines
  for ( i = 0 ; i < results.length ; i++ ){
    ss.getActiveSheet().getRange(i+2, 1).setValue(results[i].weekNumber);
    ss.getActiveSheet().getRange(i+2, 2).setValue(results[i].nbHours);
  }
  
  Logger.log("Consolidate - END");
};

Execution, Logging and results

To test this project yourself:

  • Copy all 4 files (GAS Project + 3 spreadsheets) from the link to your own Drive account
  • Change the PARENT_FOLDER_ID to the ID of the destination folder where you copied the files
  • Run the “consolidate” method from “Code.gs”
  • To validate that the code ran correctly, display the log (ctrl-enter on Windows)
  • The log should look like this:
GAS project with Log window displayed

As a final validation, verify that a new spreadsheet has appeared in the same folder (it can take 3-4 seconds to do so in the Drive view). This new spreadsheet contains all the consolidated data from the three source files:

Screenshot of the consolidated spreadsheet

That’s it ! I hope this example is useful to you somehow !

TIBCO EMS 8 on CentOS 6 VM

I needed to create a Virtual Box VM for my TIBCO experiments.

Here is how I created my base system, a CentOS VM with EMS 8.1.

This tutorial includes all relevant scripts, and associated Gnome launchers.

Prerequisites

  1. Install VirtualBox and extensions on my laptop
  2. Create a CentOS 6.5 VM on virtual box (I chose “Minimal Desktop”)
    • (follow only the first section, the guest additions are better explained at step 4).
  3. Update the packages on CentOS
  4. Install the virtual box guest additions. (Great How-to here)
  5. Install a modern Oracle JDK. I installed version 7 (yum localinstall jdk-7u60-linux-x64.rpm).

Note: You could also take an existing image from a site like this one, but I prefer making my own.

On the target VM, I created only one non-root user (name=”user”).

I use the shared folder feature of VirtualBox to share files between the host and guest systems. If you choose this path, please make sure that the user you are using is in the vboxsf group, like so:

/etc/group file in vim editor

Install EMS

Nothing difficult here… a simple wizard-based Java installer. In my case, the TIBCO home is /opt/tibco. I specified /opt/cfgtibco as the configuration folder. In both cases, I had to create the folder with root and change the rights.

#!/bin/bash
# As root !

cd /opt
mkdir tibco
mkdir cfgtibco
chown user tibco
chown user cfgtibco
chgrp user tibco
chgrp user cfgtibco
chmod 770 -R tibco
chmod 770 -R cfgtibco
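As an aside, chown accepts a combined user:group form, so the ownership steps above can be collapsed into one call per directory. A minimal sketch, demonstrated on a scratch directory with the current user so it runs unprivileged (on the VM, as root, the equivalent would be chown -R user:user /opt/tibco /opt/cfgtibco):

```shell
# Combined chown user:group plus mode, shown on a throw-away
# directory; substitute user:user and the /opt paths on the VM.
mkdir -p /tmp/tibco_demo
chown -R "$(id -un):$(id -gn)" /tmp/tibco_demo
chmod -R 770 /tmp/tibco_demo
stat -c '%a' /tmp/tibco_demo   # prints the octal mode: 770
```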
Installer command
TIBCO HOME setup
Configuration folder setup
Installation validation - Done !

Note: The installer did not work at first with the OpenJDK provided by CentOS. Hence the Oracle JDK prerequisite.

Note 2 : DO NOT try to reorganize the configuration folder. As will be seen below, EMSCA can regenerate and deliver a configuration. When it does, it takes into account the original configuration folder.

Converting the configuration

I used the provided utility to convert the old configuration files to the single JSON config file required for EMSCA:

cd /opt/cfgtibco/tibco/cfgmgmt/ems/data
/opt/tibco/ems/8.1/bin/tibemsconf2json -conf tibemsd.conf -json tibemsd.json
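Before pointing tibemsd at the generated file, a quick syntax check does not hurt. A sketch using python's json.tool, run against a stand-in file here since the real tibemsd.json lives on the VM:

```shell
# Validate that a JSON config parses; /tmp/demo.json stands in
# for the tibemsd.json generated by tibemsconf2json above.
printf '{"example": "config"}' > /tmp/demo.json
python3 -m json.tool /tmp/demo.json > /dev/null && echo "valid JSON"
```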

Cleanup of old configuration (optional)

As they will not be used anymore, I like to move the “old-style” EMS configuration to another folder:

cd /opt/cfgtibco/tibco/cfgmgmt/ems/data
mkdir OLD_CONFIG
mv *.conf OLD_CONFIG/
mv *.txt OLD_CONFIG/
mv *.conf_template OLD_CONFIG/

Creation of “EMS JSON” launcher

To use the new configuration, the launching script (in my case EMS_HOME/bin/tibemsd64.sh) must be adjusted:

cd /opt/tibco/ems/8.1/bin
./tibemsd64 -config "/opt/cfgtibco/tibco/cfgmgmt/ems/data/tibemsd.json"

Then, I create a launcher for the script :

gnome-terminal -e "gnome-terminal -e /opt/tibco/ems/8.1/bin/tibemsd64.sh"
Properties view of the desktop launcher

Install EMSCA

EMSCA is better managed with a simple initiator properties file, such as this one:

com.tibco.emsca.data.dir=/opt/cfgtibco/tibco/cfgmgmt/ems/emsca_data
com.tibco.emsca.http.hostport=*:8080

Don’t forget to create the emsca_data folder:

cd /opt/cfgtibco/tibco/cfgmgmt/ems
mkdir emsca_data

The final setup should look like this:

EMS configuration directory structure
EMS configuration directory structure

The self-contained web server can then be started with this command, which I decided to include in a launcher script on the desktop:

/opt/tibco/ems/8.1/bin/tibemsca --config /opt/cfgtibco/tibco/cfgmgmt/ems/data/emsca.properties

OR, as a Gnome shortcut:

gnome-terminal -e "/opt/tibco/ems/8.1/bin/tibemsca --config /opt/cfgtibco/tibco/cfgmgmt/ems/data/emsca.properties"

We then use the browser to connect…

Empty EMSCA

At first, EMSCA does not know about any servers. We must “create” a server (connection).

Localhost server creation

Then, it is a simple matter of following the server hyperlink to get to the main screen.

NICE !

Use EMSCA from your host system

To use EMSCA (or any other guest OS service) from your host system, follow this tutorial on port redirection with CentOS and VirtualBox.

EMS is nicer on Chrome in Mac OS !

Install Hermes JMS

1. Install a fresh Hermes with root (on Linux in my case)

Hermes as a simple Java installer wizard
Hermes installer is a simple "next,next,next" installer.

2. Create a /home/USER/.hermes or /home/USER/hermes directory and copy the files from the /usr/local/HermesJMS/SOMETHING/cfg directory (again, only important on Linux)

mkdir ~/hermes
cp /usr/local/HermesJMS/cfg/* ~/hermes

Then I was able to start Hermes with /usr/local/HermesJMS/bin/hermes.sh

You can even add a nice Gnome desktop launcher.

3. In Hermes, the first step is to get into the “Options-> Configuration…->Providers” section
4. Then, you can right-click to create a new provider. I named mine “EMS8.1”, but the value can be anything.

Pictured above : not original naming.

5. Again, use the right-click to select “add jars” and select EVERY JAR in the EMS_HOME/lib directory (use *.jar in the textbox to filter out other files). Select the “Scan” option when asked. Click “Apply” and close the configuration window with “OK”.

Use "*.jar" in the filename textbox to make it easier.
The end result should look like this.

6. The last step is to create a new Tibco EMS session (by right clicking on sessions in the main screen or using the menu bar). This usually looks a lot like this:

Could not be more simple...
Could not be more simple…

Everything is important here:

  • The first important selection is the “Loader”, which should match the provider we just created. This is critical for the rest of the form to work.
  • The TibjmsConnectionFactory is selected (see image above for complete class name), so Queues and Topics connections are possible.
  • The “Plug in” selection is also mandatory : choose “Tibco EMS”
  • Then, create the 3 required parameters in the Connection Factory section:
    • serverURL = tcp://HOST:PORT
    • userName = admin
    • userPassword = [empty in my case]

You can now right click on the session and try to discover the destinations (Topics & Queues). They will be added automagically if the connection is successful.

HermesJMS

Note: There may be a way to do some of this with JNDI, but I have not played with that yet.

Install GEMS

GEMS is a “native (TIBCO EMS APIs) and JMS based” administration and debugging tool. It is rough around the edges and unsupported by TIBCO, but it can also be more powerful than Hermes.
It is available for download here.(Original Article)

The installation is not complex. It consists of:

  • Unzipping the file (I chose ~/Gems as the destination directory)
  • Slight adjustment of the startup script, like so:
#!/bin/sh

#Edit 1 : to allow script to be executed from anywhere
cd ~/Gems

#Edit 2 : Adjust EMS location
TIBEMS_ROOT=/opt/tibco/ems/8.1

TIBEMS_JAVA=${TIBEMS_ROOT}/lib
#Edit 3 : Change to jms-2.0.jar for EMS8 or higher
CLASSPATH=${TIBEMS_JAVA}/jms-2.0.jar:${TIBEMS_JAVA}/jndi.jar
CLASSPATH=Gems.jar:looks-2.3.1.jar:jcommon-1.0.16.jar:jfreechart-1.0.13.jar:${TIBEMS_JAVA}/tibjms.jar:${TIBEMS_JAVA}/tibcrypt.jar:${TIBEMS_JAVA}/tibjmsadmin.jar:${CLASSPATH}
CLASSPATH=${CLASSPATH}:${TIBEMS_JAVA}/slf4j-api-1.4.2.jar:${TIBEMS_JAVA}/slf4j-simple-1.4.2.jar
# echo ${CLASSPATH}
java -classpath ${CLASSPATH} -Xmx128m -DPlastic.defaultTheme=DesertBluer com.tibco.gems.Gems gems.props

Please note:

  • The “cd” in the beginning
  • The modification of TIBEMS_ROOT
  • The modification from jms.jar to jms-2.0.jar (EMS > 8.0 only)
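Since a wrong jar path only fails at JVM startup with a less-than-obvious ClassNotFoundException, a small pre-flight check can help. A sketch (the check_jars function and scratch directory are mine, not part of GEMS); on the VM you would point it at /opt/tibco/ems/8.1/lib instead:

```shell
# Report which jars from a list are present in a lib directory.
# Demonstrated on a scratch directory so it runs anywhere.
check_jars() {
  libdir=$1; shift
  for jar in "$@"; do
    if [ -f "${libdir}/${jar}" ]; then
      echo "found: ${jar}"
    else
      echo "missing: ${jar}"
    fi
  done
}
mkdir -p /tmp/emslib && touch /tmp/emslib/tibjms.jar
check_jars /tmp/emslib tibjms.jar jms-2.0.jar
# prints:
#   found: tibjms.jar
#   missing: jms-2.0.jar
```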

The script can now be executed directly, or via launcher:

The cool icon is called gnome-gemsvt.png
GEMS just may be the most powerful graphical tool to configure and test EMS

TIBCO EMS Admin CLI

Last, but certainly not least, we add a launcher to the TIBCO EMS admin tool.

Note the "Application in terminal" option
tibemsadmin(64) is crude, but powerful

Conclusion

We now have a great testing VM, with administrative tools included !

The next steps could be to play with multiple instances of EMS and/or with SSL settings, or to explore EMSCA…

…then I’ll install more TIBCO products : BusinessWorks 6, BusinessEvents or others, and write more articles !

Done ! For now.


Display personal goals as QR Poster

I recently had this idea to display my life goals as a poster in my home office underground lair. The design is composed of 4 life goals and 16 proverbs, and displayed in a frame.
Here is the final result:

Please don't scan :-)

The project was relatively simple:

  • I used the QRStuff site to generate codes in both colors.
  • And then used Inkscape and its alignment features to create a .pdf
  • My local printing shop was kind enough to print it so it would fit in my Ikea frame

You, hour by hour

From the “Crontab-entry-that-is-actually-a-social-sciences-experiment” dept,
today we tinker with an easy experiment using the integrated webcam of your [Note/Net]Book.

I stumbled upon this nice CLI utility : fswebcam !

Installation is easy on Debian/Ubuntu (don’t forget su or sudo) :

aptitude install fswebcam

Now that we have the program, let’s try a simple command:

fswebcam -D 5 /some/place/nice/somefile.jpeg

This command does 3 things :

  • Start the webcam
  • Wait 5 seconds
  • Take one picture and save it at the designated location

The next step is a little weirder (and maybe a little narcissistic). Imagine we take a picture EVERY hour. For this purpose, we simply create a cron entry:

crontab -e

will bring up your favorite editor and allow you to edit the current user’s cron settings.
Just add a line like this one, and ensure the destination directory exists:

0 * * * * fswebcam -D 5 /dest/dir/$(date +\%Y-\%m-\%d-\%H\%M\%S).jpeg

This entry triggers at minute 0 of every hour, every day of every month. The $(date +\%Y-\%m-\%d-\%H\%M\%S) part creates a different filename each time.
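About those backslashes: in a crontab, a bare % is special (it is translated to a newline), so it must be escaped there. In an ordinary shell, the same filename pattern needs no escaping:

```shell
# Same date-based filename as in the crontab line, but run in a
# normal shell, where % needs no backslash:
fname="/dest/dir/$(date +%Y-%m-%d-%H%M%S).jpeg"
echo "$fname"
```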

The results are very interesting. They allow you to observe yourself.

Webcam capture results
“Best” of me of the past 4 months

So far, I was able to observe:

  • The hours I look tired
  • The days I seem to work more
  • When I look happier
  • How I position myself while working (strangely holding my head with my bent arm)

I can only assume the results are very personal, so I guess you would observe other traits yourself.

Happy spying on yourself !

Small business network – Part1 – Requirements

Hi, welcome to this first post of the “Small business network” series.

Introduction

My girlfriend’s small business is moving to new headquarters. Yay.
She needs a whole new network. Great.

This enterprise specializes in education services for early childhood. As one might imagine, this type of enterprise, especially since it has existed for only five years, does NOT have the budget for a “professional” network (read : >10 thousand $ of network hardware, expensive wiring and automatic workstation management). This series of articles is about the next best thing, a “home made” network installation, built out of some FOSS, commercial software, hardware and (a little bit of) duct tape.

This first article is aimed at specifying the business needs. We then document the requirements and draw a plan of the network installation.

Needs – a short list

  • Server(s) : File sharing (internal), File sharing (secured extranet), Incremental backup, Authentication and Authorization, Accounting software service deployment
  • Workstations: 4-5 Desktops and 2-3 Laptops (all wired to the network). These already belong to the company. Typical usage includes : email, productivity applications, web browsing.
  • Network printer/fax
  • 7 multi-line telephones (4 Lines). The technology implied is “normal” telephone lines. (see “VoIP shopping” below)

VoIP Shopping

To preempt the “You should go with VoIP” argument, I should tell you that it was considered. The main problem, in the case of this company, is that it has an ongoing agreement with its current telephone provider (a local cable company). Cancelling would imply costs that VoIP savings just don’t cover.

Aside from this, the company had already invested in pretty good “4 lines” phones from RCA. Those phones include some “telephone system” features like transfers and conference. Those features are currently sufficient for the company. Software phones (for working from home), dynamic redirections and other features could be useful, but are not necessary.

More than half the phones needed were already bought in the years before, and this added to the cost of changing as well (the difference between buying 7 new phones or only 3).

That said, and considering all I just wrote, the VoIP offer from one of the providers we consulted represented almost enough savings to convince us.

VoIP, see you in 2 to 3 years 😉

In the meantime, it is important to state as a requirement that wiring *must* be forward-compatible to a time when the whole telephone network is going to be passing through Ethernet.

Requirements

Since needs can be pretty basic when you don’t have anything in place, I have a preference for documenting the requirements as the checklist that I am going to use over the course of the whole project. Here it is.

New Headquarters network infrastructure list

  • Deploy a wired network for Workstations and telephone (see part 2)
    • Make it forward compatible for future VoIP. The 4 current lines will be carried by the same CAT5 cable that could be used for VoIP in the future.
    • Make the Workstation wiring CAT6 for new Gigabit Ethernet workstations
    • We need 13 identical faceplates with 1 Ethernet and 2 RJ-45 jacks (each holding two lines on the 4 central pins). Those last 2 should have been RJ-11 jacks, but my RJ-45 jacks were available and will do fine (even if 2 pairs of pins are going to be unused).
    • And one special faceplate with one Ethernet connector and the fax line
    • The closet (as in: replacement of a network rack) will include the following:
      • A 24-port patch panel, one port per outlet (only 13 used)
      • A distribution box of at least 49 output pairs (lines). I found a model with 50.
      • A router (a generic 4-port Linksys)
      • A 16-port Gigabit switch (unmanaged; no budget for VLANs)
      • 2 cable company modems, for the 5 telephone lines (including fax)
      • A KVM for two servers
      • A monitor, keyboard and a mouse
      • Ventilation
  • Deploy servers (see part 3)
    • Install file sharing solutions including authentication and authorization of both local and external personnel
    • Install simply accounting (server mode)
    • Configure an incremental backup solution
  • Configure workstations (see part 4)
    • Install software (Productivity)
    • Establish a good way to remotely control specific workstations from the outside

Documentation – Wiring plan

Wiring plan preview
Don’t you just love a useful document ?

The first “deliverable” in this series is a plan of the network drawn before the beginning of the wiring effort. This document is useful for many reasons:

  • It will serve as a guide for wiring (from the basement of the building)
  • It visually documents the layout of the different connectors. This will be useful in the future for managing the network closet (the plan should be displayed in the closet)
  • The corresponding numbers can be used for workstation ID, phone ID or any other relevant purpose

I hope it will inspire you in your projects and encourage you to read on.

The plan is OpenOffice/LibreOffice compatible. I also provide the PDF version.
SBN-Part1-WiringPlan.odg
SBN-Part1-WiringPlan.pdf