Easily add parallelization to any (Perl) program

Here is the scenario: suppose you have a little program that can naturally be parallelized. You have a hash, and for each entry you want to run a certain procedure, and all those procedures can run in parallel. Here is a little Perl tidbit that does just that. It spawns one thread per entry, but in batches, so you can keep tabs :-).
Imagine this working in tandem with Memoize.
$par_tabs is a global that defines the maximum number of parallel tabs (and thus threads per batch).

You need a sub per element, like so:

sub per_entry_sub($$$$$){
     my ($game,$evtime,$href,$extraparam1,$extraparam2) = @_;
     ...
     ...
}

and of course you call the parallel executor like so:

par_do_threaded(\&per_entry_sub,$extraparam1,$extraparam2,\%games);

The \&per_entry_sub is a reference to the sub itself; inside the executor it is invoked as
$func->(@args);
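
For context, here is a minimal, self-contained sketch of how the pieces fit together. The hash layout with 'href' and 'date' keys matches what the executor below reads; the sample data and extra parameters are made up for illustration:

#!/usr/bin/perl
use strict;
use warnings;
use threads;

our $par_tabs = 4;    # maximum number of parallel tabs/threads per batch

# hypothetical per-entry worker: just report what it was handed
sub per_entry_sub($$$$$){
    my ($game,$evtime,$href,$extraparam1,$extraparam2) = @_;
    print "working on $game ($evtime) -> $href [$extraparam1/$extraparam2]\n";
}

# sample data shaped the way par_do_threaded expects it
my %games = (
    'game1' => { 'href' => 'http://example.com/1', 'date' => '2016-01-01' },
    'game2' => { 'href' => 'http://example.com/2', 'date' => '2016-01-02' },
);

par_do_threaded(\&per_entry_sub, 'extraA', 'extraB', \%games);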

Aaand here is the executor:

######################################################################
#
# Parallel Threaded execution, one thread per additional firefox TAB
#
######################################################################
use threads;    # required for threads->create

sub par_do_threaded($$$$) {
    my ($func,$extraparam1,$extraparam2,$dataref) = @_;
    # dereference the data hash
    my %data = %$dataref;
    # Now fetch each entry in parallel
    my $count = 0;
    print "*****************************************************\n";
    print "**                 ENTERING PARALLEL OPERATION     **\n";
    print "*****************************************************\n";
    my $numdata = scalar keys %data;
    print "NUMBER OF ENTRIES=$numdata\n";
    return if ($numdata <= 0);
    my $entrynum = 0;
    foreach my $key (sort keys %data) {
        $entrynum++;
        print "ENTRY NUMBER: $entrynum ENTRY $key\n";
        my $entry  = $key;
        my $href   = $data{$key}{'href'};
        my $evdate = $data{$key}{'date'};
        # do it in batches of $par_tabs
        if ( $count < ($par_tabs - 1) ) {
            $count++;
            # grab the thread id before detaching: detach() does not return it
            my $thr = threads->create($func,$entry,$evdate,$href,$extraparam1,$extraparam2);
            my $tid = $thr->tid();
            $thr->detach();
            print "THREAD $tid created\n";
        } else {
            # every $par_tabs-th entry runs inline; this blocks until it
            # finishes, throttling thread creation to batches of $par_tabs
            $func->($entry,$evdate,$href,$extraparam1,$extraparam2);
            $count = 0;
        }
    }
}

Docker testing without compose

Sometimes using docker-compose during testing can be a hassle. Use the following little bit of magic to spawn and test multiple instances.
Your mileage may vary.

#!/bin/bash

#
# A semi automated way to launch multiple docker containers and test your apps parallelization
# replace the creation of index.php with your favorite git cloned code
# angelos@unix.gr
#

DOCKER="sudo docker"
IMAGE=worker
DOCKERHOST=localhost:3000

function build {
# index.php just needs to print the container hostname so that
# checkem's grep below can find it; a COPY line ships it into
# Apache's docroot (both reconstructed, adapt to your own code)
echo '<?php echo gethostname(); ?>' > index.php

echo '
FROM centos:centos6
EXPOSE 80
RUN yum -y update && \
yum -y install epel-release && \
yum -y install mod_php
COPY index.php /var/www/html/index.php
ENTRYPOINT ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
' > Dockerfile

echo "Building Master Image"
$DOCKER build . 2>&1 | tee build.log
id=`grep 'Successfully built' build.log | cut -d" " -f 3`
if [ "X${id}" == "X" ]
then
echo "build failed"
exit 1
fi

$DOCKER tag -f ${id} $IMAGE
}

function runem {
for instance in `seq 1 $1`
do
# might give an error
$DOCKER rm worker-instance${instance} >& /dev/null

hash=`$DOCKER run -d \
-p $((80+${instance})):80 \
-h worker-instance${instance} --name=worker-instance${instance} \
$IMAGE`
done

echo "======================= Docker Images ======================"
$DOCKER ps
}

function killem {
# the container name is the last column of 'docker ps' output
INSTANCES=`$DOCKER ps | grep worker-instance | awk '{print $NF}'`
for instance in $INSTANCES
do
$DOCKER kill ${instance} && $DOCKER rm ${instance}
done
}

function checkem {
INSTANCES=`$DOCKER ps | grep worker-instance | wc -l`
if [ $INSTANCES -le 0 ]
then
echo "[ERROR] No Instances found"
exit 1
fi
> usage.txt
> processes.txt
for instance in `seq 1 $INSTANCES`
do
curl -s http://localhost:$((80+${instance})) | grep worker-instance${instance} >& /dev/null
if [ $? -ne 0 ]
then
echo "Instance ${instance} is not healthy"
else
echo "Instance ${instance} is fine"
fi

echo worker-instance${instance} >> processes.txt
$DOCKER exec worker-instance${instance} ps aux >> processes.txt 2>&1

echo worker-instance${instance} >> usage.txt
$DOCKER exec worker-instance${instance} w >> usage.txt 2>&1
done

echo "Process list is in processes.txt, mem/cpu usage in usage.txt"
}

function remote {
curl -s http://localhost:2376/containers/worker-instance${1}/stats?stream=false | sed -e 's/[{}]/"/g' | awk -v RS=',"' -F: '{print $1 " " $2}' | sed -e 's/\"//g'
}

function usage {
echo "
Usage:
[build image] $0 -b
[run image instances] $0 -r <count>
[delete image] $0 -d
[kill running instances] $0 -k
[check instances] $0 -c
[check instance via remote api] $0 -a <instance>
"
}

while getopts ":a:r:bcdk" opt; do
echo "$opt was triggered, Parameter: $OPTARG" >&2
case $opt in
a)
remote $OPTARG
;;
d)
$DOCKER rmi -f $IMAGE
;;
k)
echo "Killing Instances"
killem
;;
r)
runem $OPTARG
;;
b)
build
;;
c)
checkem
;;
\?)
usage
exit 1
;;
:)
echo "Option -$OPTARG requires an argument." >&2
exit 1
;;
esac
done
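
A typical session, assuming the script above is saved as docker-test.sh (the file name is arbitrary):

./docker-test.sh -b      # build the worker image
./docker-test.sh -r 3    # run three instances on ports 81-83
./docker-test.sh -c      # health-check them; writes processes.txt and usage.txt
./docker-test.sh -k      # kill and remove the running instances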

Poor man’s XML parser for Jenkins config

Angry Jenkins is a nice tool, but its user configuration lives in XML. So what happens when, on a production server, one cannot install an XML parsing library?

Here is a little ditty that lists users by role, with the help of stingy (non-greedy) global multiline regexp matching.
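
For reference, this is roughly the shape of the role-strategy fragment inside config.xml that the regexes below walk over; it is reconstructed from the patterns themselves, so anything beyond these elements is a guess:

<role name="admin" pattern=".*">
  <permissions>...</permissions>
  <assignedSIDs>
    <sid>alice</sid>
    <sid>bob</sid>
  </assignedSIDs>
</role>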

#!/usr/bin/perl
use strict;
use warnings;

my $files = `ls /space/jenkins_instances/*/config.xml`;

print " JENKINS USERS by ROLE\n";

my @files = split('\n', $files);

foreach my $file (@files) {
    my @parts  = split('/', $file);
    my $client = $parts[3];
    print $client . "=> " . $file . "\n";
    print "-" x 78 . "\n";

    # slurp the whole config so the multiline regexes can work
    my $contents = "";
    open(my $FILE, "<", $file) || die "Cannot read file $file";
    while (<$FILE>) {
        $contents .= $_;
    }
    close($FILE);

    # walk each <role> element, non-greedily
    while ($contents =~ m/<role name="(.*?)" pattern="\.\*">(.*?)<\/role>/gsm) {
        my $role      = $1;
        my $perms_ids = $2;
        print $role . ":";
        $perms_ids =~ m/<assignedSIDs>(.*)<\/assignedSIDs>/sm;
        my $ids = $1;
        # list every <sid> assigned to this role
        while ($ids =~ m/<sid>(.*?)<\/sid>/gsm) {
            print "\t" . $1 . "\n";
        }
        print "\n";
    }
    print "\n\n";
}

JFrog Artifactory speed-up foo

Here is a quick win for JFrog’s Artifactory running behind an Apache web server via mod_proxy_ajp.

# Compression
######################################################################
  SetOutputFilter DEFLATE
  AddOutputFilterByType DEFLATE text/html text/plain text/xml text/x-js text/javascript text/css
  AddOutputFilterByType DEFLATE application/xml application/xhtml+xml application/x-javascript application/javascript
  AddOutputFilterByType DEFLATE application/json


  BrowserMatch ^Mozilla/4 gzip-only-text/html
  BrowserMatch ^Mozilla/4\.0[678] no-gzip
  BrowserMatch \bMSIE !no-gzip !gzip-only-text/html



  # Don't compress images and binary artifacts
  SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
  SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar|jar)$ no-gzip dont-vary
  SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary

  # Enable only to verify operation
  #DeflateFilterNote ratio
  #LogFormat '"%r" %b (%{ratio}n) "%{User-agent}i"' deflate
  #CustomLog /var/log/httpd/deflate_log deflate
  ######################################################################

# Caching Override
# Set up ~2.5-hour (9200-second) caching on commonly updated files
####################################################################################
  ExpiresActive On
  ExpiresDefault A9200
  Header append Cache-Control "proxy-revalidate"
####################################################################################
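
To confirm compression is actually kicking in, something like the following helps; the host and path are placeholders, any text or JSON endpoint behind the Apache will do:

# fetch with gzip allowed and inspect the response headers
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' \
  http://artifactory.example.com/artifactory/api/repositories \
  | grep -i content-encoding
# expect "Content-Encoding: gzip"; .jar/.zip downloads should show no such header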

Swiss Dormant Accounts Scraper

It has been quite a while since I last used Perl, and this dormant accounts’ web page seemed like a challenge. Since I am always up for a good challenge, here is the solution in good old-fashioned Perl.

#!/usr/bin/perl 
#
# Get Rich, Dump stuff from Swiss banks angelos karageorgiou angelos@unix.gr
# 
use strict;
use warnings;
use HTML::Form;
use WWW::Mechanize;
use HTML::Parser ();
use Data::Dumper;
use HTML::TableExtract;

my $mech = WWW::Mechanize->new();
my $url='https://www.dormantaccounts.ch/narilo/';

# first into the page, click on Publications
$mech->get( $url );
$mech->form_number(2);
$mech->click();

my $html = $mech->content();
dump_table($html);

my $cont = 1;
while ($cont) {
    print "-" x 80 . "\n";
    $cont = 0;
    my $form = $mech->form_number(2);
    my $saved_input = undef;
    foreach my $input ($form->inputs) {
        if (defined $input->value && $input->value eq 'Next') {
            $saved_input = $input;
            $cont = 1;
        }
    }
    # no 'Next' button means we are on the last page
    last unless defined $saved_input;
    $mech->click_button( input => $saved_input );
    dump_table($mech->content());
}

sub dump_table {
    my $html = shift;
    my $te = HTML::TableExtract->new();
    $te->parse($html);

    # Examine all matching tables
    foreach my $ts ($te->tables) {
        # skip the outer layout table at depth 0, count 0
        my ($depth, $count) = $ts->coords;
        next if ($depth == 0 && $count == 0);
        foreach my $row ($ts->rows) {
            foreach my $col (@$row) {
                next unless defined $col;
                $col =~ s/\s+/ /g;    # squeeze runs of whitespace
                print "'$col' ;";
            }
            print "\n";
        }
    }
}

SaltStack script testing using Docker

There are literally thousands of ways for a SaltStack script to go awry! One simple way to test these scripts is to create an automated testing rig with a single salt-master and many salt-minions.

First one needs a build script


#!/bin/bash
#
# build the docker images and start the test environment
# angelos@unix.gr
#
echo "Cleaning Up"
docker rm -f salt-master
for minion in 1 2 3
do
 docker rm -f salt-minion${minion}
done
echo "Building Master Image"
echo "Copying my ssh credentials to be used for git"
mkdir -p creds 2> /dev/null
cp $HOME/.ssh/id_rsa creds/
cp $HOME/.ssh/known_hosts creds/
cp $HOME/.gitconfig creds/gitconfig
docker build . 2>&1 | tee master.log
id=`grep 'Successfully built' master.log | cut -d" " -f 3`
if [ "X${id}" == "X" ]
then
 echo "Salt Master build failed"
 exit 1
fi
docker run -d \
 -h salt-master --name salt-master \
 -v $PWD/bath:/srv/salt \
 -v $PWD/pillar:/srv/pillar \
 --memory-swappiness=1 $id
docker tag -f ${id} salt-master
echo "Building Minion image"
docker build -f Dockerfile.minion . 2>&1 | tee minion.log
id=`grep 'Successfully built' minion.log | cut -d" " -f 3`
if [ "X${id}" == "X" ]
then
 echo "Salt Minion build failed"
 exit 1
fi
for minion in 1 2 3
do
 hash=`docker run -d \
 -h salt-minion${minion} --name=salt-minion${minion} \
 --link salt-master \
 --memory-swappiness=1 $id`
 docker tag -f ${id} salt-minion${minion}
done
docker ps

The big trick is not so much the Dockerfiles that create the relevant docker images as each image’s startup script.
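
For completeness, here is a guess at what such a master Dockerfile might contain. This is purely a sketch: the real one is not shown here, and the base image, salt packages, and paths below are all assumptions:

FROM centos:centos6
# salt packages come from EPEL on this vintage of CentOS
RUN yum -y install epel-release && \
    yum -y install salt-master salt-minion openssh-server rsyslog
# git-over-ssh credentials gathered by the build script above
COPY creds/id_rsa creds/known_hosts /root/.ssh/
COPY creds/gitconfig /root/.gitconfig
COPY master-start.sh /master-start.sh
RUN chmod +x /master-start.sh
CMD ["/master-start.sh"]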

Here is the salt-master’s docker CMD script


~/saltstack (master *% u=)$ cat master-start.sh 
#!/bin/bash

service rsyslog start
service salt-minion start
service salt-master start
service sshd start
echo "Sleeping a bit: 15 secs"
sleep 15
Echo "Auto-accepting All minion keys"
salt-key -A -y
echo "Going into infinity"
sleep infinity;true

Here is the salt-minions’ docker CMD script


~/saltstack (master *% u=)$ cat minion-start.sh 
#!/bin/bash
service rsyslog start
salt-minion --daemon --log-level debug
echo "Going into infinity"
sleep infinity;true

The above couple of almost brain-dead scripts will allow you to create an automated test platform for SaltStack scripts.
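
Once the rig is up, a quick sanity check from the host looks like this (container names as per the build script above):

docker exec salt-master salt-key -L                # all three minions should be listed as accepted
docker exec salt-master salt '*' test.ping         # every minion should answer True
docker exec salt-master salt '*' state.highstate   # run your states against the whole fleet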

SaltStack: encrypted VSphere Credentials

I got the idea from https://clinta.github.io/random-local-passwords/

It integrates GPG, Git, and Salt for relatively secure, centralized credentials storage.
Your mileage may vary.

Here is the diff that allows credentials for cloud.providers to be stored in encrypted form:


*** /usr/lib/python2.6/site-packages/salt/cloud/clouds/vmware.py.old 2015-12-15 11:47:05.703214983 +0000
--- /usr/lib/python2.6/site-packages/salt/cloud/clouds/vmware.py 2015-12-15 12:56:18.067154711 +0000
***************
*** 67,72 ****
--- 67,73 ----
 import time
 import os.path
 import subprocess
+ import re
 
 # Import salt libs
 import salt.utils
***************
*** 197,202 ****
--- 198,212 ----
 port = config.get_cloud_config_value(
 'port', get_configured_provider(), __opts__, search_global=False, default=443
 )
+ ext_auth_method = config.get_cloud_config_value(
+ 'password_program', get_configured_provider(), __opts__, search_global=False, default=''
+ )
+ pw_store = config.get_cloud_config_value(
+ 'pw_store', get_configured_provider(), __opts__, search_global=False, default='/opt/passdb'
+ )
+ 
+ if ext_auth_method=='pass':
+ password=_get_pw_from_pass(username,pw_store)
 
 return salt.utils.vmware.get_service_instance(url,
 username,
***************
*** 3569,3571 ****
--- 3579,3605 ----
 return False
 
 return {datastore_cluster_name: 'created'}
+ 
+ 
+ 
+ def _get_pw_from_pass(pw_name, pw_store):
+     '''
+     Get a password from the pass utility (GPG must be active); remember to prime pass with the GPG secret
+     '''
+     my_env = os.environ
+     my_env["PASSWORD_STORE_DIR"] = pw_store
+ 
+     # synchronize first
+     devnull = open(os.devnull, 'w')
+     subprocess.call(['/usr/bin/pass','git', 'pull'],env=my_env,cwd=pw_store,stdout=devnull, stderr=devnull)
+ 
+     pw_file = '{0}/{1}.gpg'.format(pw_store, pw_name)
+     log.info("trying to get pass from '{0}'".format(pw_file))
+     if os.path.isfile(pw_file):
+         log.info("trying to get pass for '{0}'".format(pw_name))
+         proc = subprocess.Popen(["/usr/bin/pass",  pw_name], env=my_env,cwd=pw_store,stdout=subprocess.PIPE)
+         pass_plaintext = proc.stdout.readline().rstrip()
+         return pass_plaintext
+     else:
+         log.info("GPGed password file not found '{0}'".format(pw_file))
+ 
+     return 'Pass Not Found'
+ 
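
On the pass side, the store the patched driver reads can be prepared roughly like this. A sketch only: 'vsphere-admin' is a made-up entry name, and the entry must be named after the provider username, since the patch looks the password up by username. In cloud.providers you then set password_program to 'pass' and, optionally, pw_store to the store path, matching the two config values read above:

export PASSWORD_STORE_DIR=/opt/passdb   # the pw_store default in the diff
pass init YOUR-GPG-KEY-ID               # initialize the store against your GPG key
pass git init                           # the patch runs 'pass git pull' before each lookup
pass insert vsphere-admin               # store the vCenter password under the username
pass vsphere-admin                      # prints the decrypted password, as the driver will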

Scattered Thoughts on the Economy of the Greeks

Economics 101

First of all, a lesson in economics: governments must create jobs, in part by giving entrepreneurs a bit of backing. Oppositions must keep the governing side in check so it does not run wild. Given that government and opposition can swap places at any moment, they _MUST_ all have a moral foundation they can stand on. That is where the modern Greek deficiency lies. Call it right or left, whatever you like; the labels no longer mean anything.

The big issue in Greece now is jobs.

Since the Greek economy cannot create them on its own, it needs investment from abroad. Given the current political situation, and regardless of the outcome of the referendum, that investment will not come.

No rational person is going to throw their money into Greece while they can never know what tax laws they will wake up to, nor what they will face as adversaries; witness the beast of populist bureaucracy. Ergo, the only investments that will be made will come from the state, for the state's own reasons. That is the slow death of the private sector, and we will live to see it. Whoever has not secured some cozy arrangement will live off state charity / unemployment benefits.

In that sense the statists have won outright, and the rest of us, along with our children, have lost... Even if tourism blossoms again, the damage already done is crushing for at least a whole generation.
The only thing our children have left to hope for is a heroic exodus from the country.
In the end, let us finally admit that our people are Oriental at heart.

They vote for their sheikhs, kiss their hands, get things done through baksheesh, and always blame the West. There is no chance elections will change the politicians, especially since they are all members of the same mutual admiration society (Parliament).

Our democracy, that is, our basic education, ultimately our very selves, needs a radical re-examination.

A shroud over the mind

On the knowledge economy of the Internet era.

While I am a great proponent of technology and of the Internet as a collaboration medium, I have come to realize that its effect is that of a shroud over the mind.

The sheer amount of data / quotes / papers available makes us think that we indeed live in the knowledge era, but this is not so. The data presented is only skin deep, and there is rarely any way to breach the surface tension of the shroud and dive into the deep associations that made this data surface in the first place.

As such, I am more apt to declare that the true knowledge-based society was that of the Renaissance, when people really tried to understand the mechanics of the world, whereas nowadays any fake authority drives people to make un-researched assumptions. I therefore propose a balanced approach to the so-called “knowledge” economy: let us delve back into simple things, such as fine literature, which teaches subtly and more deeply than any projector presentation.

Then we can use the Internet to its true and full potential, already armed with a critical mind. The Net is a powerful yet ultimately dangerous weapon, for it is a substitute for what makes us truly human: our thinking. It is no accident that the number of PhDs and the number of functional illiterates among them are rising in step.

Harq al’Ada

I have been listening to and reading all sorts of proposals about what Greece needs in order to recover from its endemic recession. Many politicians and economists, including the notorious Varoufakis, propose this recipe or that playbook that has worked in the past, in this country or that region.

Quite frankly, everything is going to fail for a single reason: Greece refuses to change the habits that are pushing it towards the same fate as Atlantis. The country is rife with cronyism and with undereducated, supernumerary public-sector employees, to point out only the most glaring problems.

What Greece needs is what the venerated Frank Herbert wrote about throughout his work: the “Harq al’Ada”, the breaking of the habit. It is the same meme that Marvin Minsky invokes when he says: “Try to surprise yourself by the way you think today”. The old ways and the stale ancestor worship must end.

All that we Greeks knew, all that we know, is useless. We have to adapt, or we will be overcome by our very own deficiencies; that is the gist of the 20th century, and we are already treading water in the 21st. Adapting means changing fundamentally, in habits as well as values, and I am calling on all intelligent people to get involved, or Atlantis will no longer be an ancient myth.