Introduction
The ELIXIR Compute Cloud (ECC) is an instance hosted within the CloudVeneto cloud infrastructure. To access the ECC, you must first request permission through CloudVeneto. Once submitted, the request will be reviewed by the ECC administrators.
Access request
In order to request access to ECC, you must have an account on the OpenStack Horizon web dashboard, as outlined in the CloudVeneto User Guide. Follow these steps to create your account:
- Open https://cloudveneto.ict.unipd.it/dashboard in your browser and click the Register button.
- Log in with your UniPD Single Sign-On account (i.e. an email address ending in @unipd.it or @studenti.unipd.it) by clicking on the UniPD logo to start the enrollment procedure.
- In the User Registration form, fill in all required fields and select ELIXIR Compute Cloud as the existing project.
- You will receive an email with the Acceptable Use Policy (AUP) for the ECC infrastructure. Accept the AUP terms to complete the submission of your access request.
Once submitted, your request will be forwarded to the ECC admin for evaluation. You will receive an email notifying you whether your request has been approved or denied.
If approved, you will receive two emails:
- The first email from CloudVeneto support will include credentials to access the CloudVeneto Gate machine (cv_user and cv_pass).
- The second email from ECC admins will include credentials to access the ECC headnode (ecc_user, ecc_pass, headnode_ip).
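Throughout this guide, cv_user, cv_pass, ecc_user, ecc_pass, and headnode_ip are placeholders for the credentials from these two emails. If you prefer, you can define them once as shell variables and reuse them in the commands that follow (the values below are made up; substitute your own):

```shell
# hypothetical placeholder values -- replace with the credentials you received
cv_user=jdoe                 # CloudVeneto Gate username (first email)
ecc_user=jdoe_ecc            # ECC headnode username (second email)
headnode_ip=10.64.0.1        # ECC headnode IP address (second email)
echo "Gate login: ${cv_user}@gate.cloudveneto.it"
echo "Headnode login: ${ecc_user}@${headnode_ip}"
```
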
Connect to the ECC
You can access ECC via Secure Shell Protocol (SSH) using the credentials provided in the emails from CloudVeneto and ECC admins. For a smoother connection experience, it is recommended to use SSH key-based authentication instead of password authentication.
To create SSH keys for authentication and use them to connect to the CloudVeneto Gate, run the following commands:
# generate a new key pair locally (preferably with passphrase)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_ecc
# copy the public key to CloudVeneto Gate machine (it will ask for cv_pass)
cat ~/.ssh/id_ed25519_ecc.pub | \
ssh cv_user@gate.cloudveneto.it 'cat >id_ed25519_ecc.pub && \
mkdir -p .ssh && \
chmod 700 .ssh && \
mv id_ed25519_ecc.pub .ssh/id_ed25519_ecc.pub && \
cat .ssh/id_ed25519_ecc.pub >>.ssh/authorized_keys'
# copy the private key to CloudVeneto Gate machine (it will ask for cv_pass)
cat ~/.ssh/id_ed25519_ecc | \
ssh cv_user@gate.cloudveneto.it \
'cat >.ssh/id_ed25519_ecc && chmod 600 .ssh/id_ed25519_ecc'
# connect to CloudVeneto Gate machine (it will ask for SSH key passphrase, if used)
ssh -i ~/.ssh/id_ed25519_ecc cv_user@gate.cloudveneto.it
# copy the public key from the Gate machine to ECC headnode (it will ask for ecc_pass)
cat ~/.ssh/id_ed25519_ecc.pub | \
ssh ecc_user@headnode_ip 'mkdir -p .ssh && chmod 700 .ssh && \
cat >.ssh/id_ed25519_ecc.pub && \
cat .ssh/id_ed25519_ecc.pub >>.ssh/authorized_keys'
# test connection to ECC from Gate machine (it will ask for SSH passphrase, if used)
ssh -i ~/.ssh/id_ed25519_ecc ecc_user@headnode_ip
exit
# exit the Gate machine as well to return to your local machine
exit
Accessing ECC from your local machine requires proxying the SSH connection through the CloudVeneto Gate. You can achieve this by using the following SSH command:
# (optionally) add key to ssh-agent (it may ask for SSH key passphrase)
ssh-add ~/.ssh/id_ed25519_ecc
# connect to ECC via proxy
ssh -i ~/.ssh/id_ed25519_ecc \
-o StrictHostKeyChecking=accept-new \
-o ProxyCommand="ssh -i ~/.ssh/id_ed25519_ecc \
-W %h:%p cv_user@gate.cloudveneto.it" \
ecc_user@headnode_ip
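On OpenSSH 7.3 or newer, the same proxying can be written more compactly with the -J (ProxyJump) flag. Adding -G makes ssh print the configuration it would use and exit without connecting, which is a convenient way to check that the jump host is picked up (cv_user, ecc_user, and headnode_ip are the placeholders from the credential emails):

```shell
# -J is shorthand for the ProxyCommand form above (OpenSSH 7.3+):
#   ssh -J cv_user@gate.cloudveneto.it ecc_user@headnode_ip
# with -G, ssh only prints its resolved client options -- no connection is made
ssh -G -J cv_user@gate.cloudveneto.it ecc_user@headnode_ip | grep -E '^(user |hostname |proxyjump )'
```
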
You can simplify the SSH connection to ECC by configuring your SSH config file:
# update ssh config with proxy and headnode
cat <<EOF | tee -a ~/.ssh/config
Host cvgate
    HostName gate.cloudveneto.it
    User cv_user
    IdentityFile ~/.ssh/id_ed25519_ecc

Host ecc
    HostName headnode_ip
    User ecc_user
    IdentityFile ~/.ssh/id_ed25519_ecc
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking accept-new
    ProxyJump cvgate
EOF
# connect to ECC
ssh ecc
# copy files to and from ECC with scp
scp localdir/file ecc:remotedir/
scp ecc:remotedir/file localdir/
# or rsync
rsync -ahv localdir/ ecc:remotedir/
rsync -ahv ecc:remotedir/ localdir/
Job submission example
The following example submits a dummy CPU-intensive task using two threads in parallel to the Slurm scheduler:
# create work folder
mkdir slurm-test && cd slurm-test
# create simple.sh worker script
cat <<'EOF' | tee simple.sh
#!/bin/bash
#SBATCH -J simplejob
#SBATCH -o %x.%A.%a.out
#SBATCH -e %x.%A.%a.err
#SBATCH --mail-type=ALL
echo -e "$(date)\tStarting job $SLURM_JOB_ID:$SLURM_ARRAY_TASK_ID on $SLURMD_NODENAME ..."
if [ -n "$1" ]; then
    rnd=$1
else
    rnd=$(shuf -i 5-30 -n 1)
fi
echo "working for $rnd s ..."
yes > /dev/null &
ypid=$!
yes > /dev/null &
ypid2=$!
sleep $rnd
echo "killing job $ypid ..."
{ kill $ypid && wait $ypid; } 2>/dev/null
echo "killing job $ypid2 ..."
{ kill $ypid2 && wait $ypid2; } 2>/dev/null
echo "all done, exiting with 0"
ex=0
echo -e "$(date)\tJob $SLURM_JOB_ID:$SLURM_ARRAY_TASK_ID ended with $ex"
exit $ex
EOF
# submit as an array job of 5 tasks, each allocated 2 CPUs (max runtime 1 min; max 1 GB memory per task)
rm -f *.{err,out}; sbatch -n2 -a 1-5 --time 1 --mem=1G simple.sh
rm -f *.{err,out}; sbatch -n2 -a 1-5 --time 1 --mem=1G simple.sh 120 # these jobs should timeout
# monitor Slurm queues and job status
watch -d "sinfo -N -S '-P' -o '%8N %9P %.5T %.13C %.8O %.8e %.6m %.8d %.6w %.8f %20E' | cut -c-\$COLUMNS; \
echo; echo; \
squeue --format='%12i %10j %6u %8N %4P %4C %7m %8M %10T %16R %o' | cut -c-\$COLUMNS; \
echo; echo; \
sacct -X -a --format JobID,User,JobName,Partition,AllocCPUS,State,ExitCode,End,ElapsedRaw | \
tail | tac | grep -v 'JobID\|^---' | \
awk 'BEGIN{print \" JobID User JobName Partition AllocCPUS State ExitCode End ElapsedRaw\n------------ --------- ---------- ---------- ---------- ---------- -------- ------------------- ----------\"}{print}' | \
cut -c-\$COLUMNS"
Privacy policy
The complete privacy policy for the ECC is available here.
Get support
For any issues, questions, or problems concerning ECC access, please use the support request form or contact ecc@elixir.biomed.unipd.it.