<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.ccn.ucla.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Acho</id>
	<title>Center for Cognitive Neuroscience - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.ccn.ucla.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Acho"/>
	<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php/Special:Contributions/Acho"/>
	<updated>2026-05-06T09:37:42Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Accessing_the_Cluster&amp;diff=3124</id>
		<title>Hoffman2:Accessing the Cluster</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Accessing_the_Cluster&amp;diff=3124"/>
		<updated>2016-04-18T21:45:10Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* NX Client - GUI */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
Here are some of our favorite ways to access the Hoffman2 Cluster login nodes.&lt;br /&gt;
&lt;br /&gt;
==SSH - Command Line==&lt;br /&gt;
SSH stands for &#039;&#039;Secure Shell&#039;&#039; and is a method of logging into a remote computer over an encrypted connection.  The ssh command-line client is available on most Unix-like operating systems, with ports available for Windows.&lt;br /&gt;
&lt;br /&gt;
===Mac/Linux/Unix===&lt;br /&gt;
====Simple SSH====&lt;br /&gt;
Use the ssh command from a terminal:&lt;br /&gt;
 ssh login_id@hoffman2.idre.ucla.edu&lt;br /&gt;
where login_id is replaced by your cluster user name.&lt;br /&gt;
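If you connect often, an SSH client config entry saves retyping the full hostname. A minimal sketch (the alias &amp;quot;hoffman2&amp;quot; and the user name login_id are placeholders; the file normally lives at ~/.ssh/config, but is written to a temporary file here purely for illustration):&lt;br /&gt;

```shell
# Sketch of an ~/.ssh/config entry that shortens
#   ssh login_id@hoffman2.idre.ucla.edu
# to simply:
#   ssh hoffman2
# Written to a temp file here for illustration only.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host hoffman2
    HostName hoffman2.idre.ucla.edu
    User login_id
EOF
cat "$cfg"
```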
&lt;br /&gt;
&lt;br /&gt;
====GUI-Enabled SSH [Recommended]====&lt;br /&gt;
Macs (after Snow Leopard, 10.6.x) no longer come with an X Window System server pre-installed.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Before doing the following steps, please install [http://xquartz.macosforge.org/ XQuartz] and restart your computer.&#039;&#039;&#039;&lt;br /&gt;
For more information about XQuartz, read [http://support.apple.com/kb/ht5293 here].&lt;br /&gt;
&#039;&#039;&#039;WARNING:&#039;&#039;&#039; On Mac OS X 10.10 (Yosemite), you may need to add &amp;quot;export DISPLAY=:0.0&amp;quot; to your shell profile for X11 forwarding to work.&lt;br /&gt;
# Open up X11/XQuartz or Terminal.  Both are under &#039;&#039;Applications &amp;gt; Utilities&#039;&#039; on Macs.&lt;br /&gt;
# Type the command&lt;br /&gt;
#: &amp;lt;pre&amp;gt;$ ssh -X [USERNAME]@hoffman2.idre.ucla.edu&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: filling in your Hoffman2 username.&lt;br /&gt;
#: The &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; is for X11 Forwarding so that any graphics that are rendered on Hoffman2 get forwarded to the screen of your computer.&lt;br /&gt;
# Press enter and type in your password when it asks for it.  No characters or asterisks will show up while you type.&lt;br /&gt;
# If your password is correct, you will be greeted by the Hoffman2 login message: you have successfully SSHed into a login node.&lt;br /&gt;
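Once logged in, a quick way to confirm that X11 forwarding is active is to check whether the DISPLAY variable is set in your remote shell (ssh typically sets it to something like localhost:10.0). The check itself is plain shell; the DISPLAY value below is a hypothetical stand-in for what sshd would set on Hoffman2:&lt;br /&gt;

```shell
# After "ssh -X", the remote shell should have DISPLAY set by sshd.
# Simulated with a hypothetical value here; on Hoffman2 you would just
# run the if-statement below in your login shell.
DISPLAY="localhost:10.0"
if [ -n "$DISPLAY" ]; then
    echo "X11 forwarding active: DISPLAY=$DISPLAY"
else
    echo "DISPLAY unset: check that you passed -X and XQuartz is running"
fi
```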
&lt;br /&gt;
===Windows===&lt;br /&gt;
# Go [http://www.hoffman2.idre.ucla.edu/access/login/ here] and follow the instructions under &#039;&#039;Windows&#039;&#039;.  We recommend [http://www.hoffman2.idre.ucla.edu/access/putty/ PuTTY] or Cygwin.&lt;br /&gt;
#: (If you use PuTTY, please install [http://sourceforge.net/projects/xming/ Xming] for GUI access.)&lt;br /&gt;
# Once you have that set up, the process is the same as if you were on a Mac or Linux/Unix machine.&lt;br /&gt;
&lt;br /&gt;
==NX Client - GUI==&lt;br /&gt;
&lt;br /&gt;
: &#039;&#039;The official description of how to do this is found [http://hpc.ucla.edu/hoffman2/access/nx.php here]&#039;&#039;&lt;br /&gt;
The NX Client program allows you to set up a Virtual Network Computing (VNC)-like session with Hoffman2.  This session will keep running even if your Internet connection drops in and out (much like [[Using Screen|screen]] on the command line).&lt;br /&gt;
&lt;br /&gt;
===Mac OS X 10.7+ / Windows / Linux===&lt;br /&gt;
==== What You Need====&lt;br /&gt;
# Go to the [https://www.nomachine.com/download NoMachine download page] and download and install the NoMachine client for Mac OS X, Windows, or Linux.&lt;br /&gt;
# Hoffman2 NX Client Public Key&lt;br /&gt;
#* To get the NX Client Public Key, follow the steps below or email support@ccn.ucla.edu&lt;br /&gt;
#** (OSX/Linux) Open up a Terminal and run the following command (replacing USERNAME with your Hoffman2 username)&lt;br /&gt;
#**:&amp;lt;code&amp;gt;$ scp USERNAME@hoffman2.idre.ucla.edu:/etc/nxserver/client.id_dsa.key ~/Documents/&amp;lt;/code&amp;gt;&lt;br /&gt;
#** (Windows) Use an SFTP program to download the file /etc/nxserver/client.id_dsa.key from Hoffman2 (hoffman2.idre.ucla.edu)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Setup====&lt;br /&gt;
# Open NoMachine (found under Applications or on your desktop) and click Continue.&lt;br /&gt;
# A window titled &amp;quot;New Connection&amp;quot; will appear.  Fill out the fields as follows:&lt;br /&gt;
#* Protocol -- SSH&lt;br /&gt;
#* Host -- &amp;quot;hoffman2.idre.ucla.edu&amp;quot;&lt;br /&gt;
#* Port -- 22&lt;br /&gt;
&lt;br /&gt;
#* Select &amp;quot;Use the NoMachine login&amp;quot;&lt;br /&gt;
#* Select Alternate Server Key and (...) - and find the file (client.id_dsa.key) you downloaded earlier (in your Documents folder).&lt;br /&gt;
#* Don&#039;t use a proxy&lt;br /&gt;
#* Name -- Something like &amp;quot;Hoffman2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Double click on the connection you just created (it should be the only one in the list).&lt;br /&gt;
# Enter your Hoffman2 username and password and click &amp;quot;OK&amp;quot; (You may also check the box labeled &amp;quot;Save this setting in the configuration file&amp;quot; to avoid retyping this in the future)&lt;br /&gt;
# Select &amp;quot;Create a new session&amp;quot; or &amp;quot;New Virtual Desktop&amp;quot;.&lt;br /&gt;
# In the next menu, select Create new &#039;&#039;&#039;GNOME&#039;&#039;&#039; virtual desktop.&lt;br /&gt;
# A virtual desktop should appear!&lt;br /&gt;
&lt;br /&gt;
Reconnections in this client are not currently supported for Hoffman2, so please make sure to log out and close your connections properly. [http://hpc.ucla.edu/hoffman2/access/nx.php#logout]&lt;br /&gt;
&lt;br /&gt;
====Troubleshooting====&lt;br /&gt;
If your NX Client session freezes and you are unable to close it properly, open &#039;&#039;NX Session Administrator&#039;&#039; and disconnect your session from there. This freezing often occurs when your Internet connection is lost abruptly. Another possible cause for freezing is scrolling on certain Windows touchpads.&lt;br /&gt;
&lt;br /&gt;
For more information, see [http://hpc.ucla.edu/hoffman2/access/nx.php Hoffman2 NX Client].&lt;br /&gt;
&lt;br /&gt;
If you are unable to open Firefox (&amp;quot;Firefox is already running, but is not responding. To open a new window, you must first close the existing Firefox process, or restart your system.&amp;quot;), deleting ~/.mozilla might fix the problem. &#039;&#039;Be warned:&#039;&#039; this will erase your profile, including bookmarks, history, saved passwords, etc! For instructions on backing up and restoring profile information, see [https://support.mozilla.org/en-US/kb/back-and-restore-information-firefox-profiles Mozilla Support]. Make sure to perform these actions within No Machine, and not on your local system.&lt;br /&gt;
&lt;br /&gt;
== Change Passwords ==&lt;br /&gt;
Once you&#039;ve logged on and made sure it works, you can change your password to something more memorable.&lt;br /&gt;
To change passwords, log on and type:&lt;br /&gt;
 passwd&lt;br /&gt;
It will ask for your old password, then prompt twice for the new one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[http://www.hoffman2.idre.ucla.edu/access/ Hoffman2 Access]&lt;br /&gt;
*[[Hoffman2:Accessing_the_Cluster-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Accessing_the_Cluster&amp;diff=3123</id>
		<title>Hoffman2:Accessing the Cluster</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Accessing_the_Cluster&amp;diff=3123"/>
		<updated>2016-04-18T21:44:43Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* External Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
Here are some of our favorite ways to access the Hoffman2 Cluster login nodes.&lt;br /&gt;
&lt;br /&gt;
==SSH - Command Line==&lt;br /&gt;
SSH stands for &#039;&#039;Secure Shell&#039;&#039; and is a method of logging into a remote computer over an encrypted connection.  The ssh command-line client is available on most Unix-like operating systems, with ports available for Windows.&lt;br /&gt;
&lt;br /&gt;
===Mac/Linux/Unix===&lt;br /&gt;
====Simple SSH====&lt;br /&gt;
Use the ssh command from a terminal:&lt;br /&gt;
 ssh login_id@hoffman2.idre.ucla.edu&lt;br /&gt;
where login_id is replaced by your cluster user name.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====GUI-Enabled SSH [Recommended]====&lt;br /&gt;
Macs (after Snow Leopard, 10.6.x) no longer come with an X Window System server pre-installed.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Before doing the following steps, please install [http://xquartz.macosforge.org/ XQuartz] and restart your computer.&#039;&#039;&#039;&lt;br /&gt;
For more information about XQuartz, read [http://support.apple.com/kb/ht5293 here].&lt;br /&gt;
&#039;&#039;&#039;WARNING:&#039;&#039;&#039; On Mac OS X 10.10 (Yosemite), you may need to add &amp;quot;export DISPLAY=:0.0&amp;quot; to your shell profile for X11 forwarding to work.&lt;br /&gt;
# Open up X11/XQuartz or Terminal.  Both are under &#039;&#039;Applications &amp;gt; Utilities&#039;&#039; on Macs.&lt;br /&gt;
# Type the command&lt;br /&gt;
#: &amp;lt;pre&amp;gt;$ ssh -X [USERNAME]@hoffman2.idre.ucla.edu&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: filling in your Hoffman2 username.&lt;br /&gt;
#: The &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; is for X11 Forwarding so that any graphics that are rendered on Hoffman2 get forwarded to the screen of your computer.&lt;br /&gt;
# Press enter and type in your password when it asks for it.  No characters or asterisks will show up while you type.&lt;br /&gt;
# If your password is correct, you will be greeted by the Hoffman2 login message: you have successfully SSHed into a login node.&lt;br /&gt;
&lt;br /&gt;
===Windows===&lt;br /&gt;
# Go [http://www.hoffman2.idre.ucla.edu/access/login/ here] and follow the instructions under &#039;&#039;Windows&#039;&#039;.  We recommend [http://www.hoffman2.idre.ucla.edu/access/putty/ PuTTY] or Cygwin.&lt;br /&gt;
#: (If you use PuTTY, please install [http://sourceforge.net/projects/xming/ Xming] for GUI access.)&lt;br /&gt;
# Once you have that set up, the process is the same as if you were on a Mac or Linux/Unix machine.&lt;br /&gt;
&lt;br /&gt;
==NX Client - GUI==&lt;br /&gt;
: &#039;&#039;The official description of how to do this is found [http://hpc.ucla.edu/hoffman2/access/nx.php here]&#039;&#039;&lt;br /&gt;
The NX Client program allows you to set up a Virtual Network Computing (VNC)-like session with Hoffman2.  This session will keep running even if your Internet connection drops in and out (much like [[Using Screen|screen]] on the command line).&lt;br /&gt;
&lt;br /&gt;
===Mac OS X 10.7+ / Windows / Linux===&lt;br /&gt;
==== What You Need====&lt;br /&gt;
# Go to the [https://www.nomachine.com/download NoMachine download page] and download and install the NoMachine client for Mac OS X, Windows, or Linux.&lt;br /&gt;
# Hoffman2 NX Client Public Key&lt;br /&gt;
#* To get the NX Client Public Key, follow the steps below or email support@ccn.ucla.edu&lt;br /&gt;
#** (OSX/Linux) Open up a Terminal and run the following command (replacing USERNAME with your Hoffman2 username)&lt;br /&gt;
#**:&amp;lt;code&amp;gt;$ scp USERNAME@hoffman2.idre.ucla.edu:/etc/nxserver/client.id_dsa.key ~/Documents/&amp;lt;/code&amp;gt;&lt;br /&gt;
#** (Windows) Use an SFTP program to download the file /etc/nxserver/client.id_dsa.key from Hoffman2 (hoffman2.idre.ucla.edu)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Setup====&lt;br /&gt;
# Open NoMachine (found under Applications or on your desktop) and click Continue.&lt;br /&gt;
# A window titled &amp;quot;New Connection&amp;quot; will appear.  Fill out the fields as follows:&lt;br /&gt;
#* Protocol -- SSH&lt;br /&gt;
#* Host -- &amp;quot;hoffman2.idre.ucla.edu&amp;quot;&lt;br /&gt;
#* Port -- 22&lt;br /&gt;
&lt;br /&gt;
#* Select &amp;quot;Use the NoMachine login&amp;quot;&lt;br /&gt;
#* Select Alternate Server Key and (...) - and find the file (client.id_dsa.key) you downloaded earlier (in your Documents folder).&lt;br /&gt;
#* Don&#039;t use a proxy&lt;br /&gt;
#* Name -- Something like &amp;quot;Hoffman2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Double click on the connection you just created (it should be the only one in the list).&lt;br /&gt;
# Enter your Hoffman2 username and password and click &amp;quot;OK&amp;quot; (You may also check the box labeled &amp;quot;Save this setting in the configuration file&amp;quot; to avoid retyping this in the future)&lt;br /&gt;
# Select &amp;quot;Create a new session&amp;quot; or &amp;quot;New Virtual Desktop&amp;quot;.&lt;br /&gt;
# In the next menu, select Create new &#039;&#039;&#039;GNOME&#039;&#039;&#039; virtual desktop.&lt;br /&gt;
# A virtual desktop should appear!&lt;br /&gt;
&lt;br /&gt;
Reconnections in this client are not currently supported for Hoffman2, so please make sure to log out and close your connections properly. [http://hpc.ucla.edu/hoffman2/access/nx.php#logout]&lt;br /&gt;
&lt;br /&gt;
====Troubleshooting====&lt;br /&gt;
If your NX Client session freezes and you are unable to close it properly, open &#039;&#039;NX Session Administrator&#039;&#039; and disconnect your session from there. This freezing often occurs when your Internet connection is lost abruptly. Another possible cause for freezing is scrolling on certain Windows touchpads.&lt;br /&gt;
&lt;br /&gt;
For more information, see [http://hpc.ucla.edu/hoffman2/access/nx.php Hoffman2 NX Client].&lt;br /&gt;
&lt;br /&gt;
If you are unable to open Firefox (&amp;quot;Firefox is already running, but is not responding. To open a new window, you must first close the existing Firefox process, or restart your system.&amp;quot;), deleting ~/.mozilla might fix the problem. &#039;&#039;Be warned:&#039;&#039; this will erase your profile, including bookmarks, history, saved passwords, etc! For instructions on backing up and restoring profile information, see [https://support.mozilla.org/en-US/kb/back-and-restore-information-firefox-profiles Mozilla Support]. Make sure to perform these actions within No Machine, and not on your local system.&lt;br /&gt;
&lt;br /&gt;
== Change Passwords ==&lt;br /&gt;
Once you&#039;ve logged on and made sure it works, you can change your password to something more memorable.&lt;br /&gt;
To change passwords, log on and type:&lt;br /&gt;
 passwd&lt;br /&gt;
It will ask for your old password, then prompt twice for the new one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[http://www.hoffman2.idre.ucla.edu/access/ Hoffman2 Access]&lt;br /&gt;
*[[Hoffman2:Accessing_the_Cluster-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Accessing_the_Cluster&amp;diff=3122</id>
		<title>Hoffman2:Accessing the Cluster</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Accessing_the_Cluster&amp;diff=3122"/>
		<updated>2016-04-18T21:43:33Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
Here are some of our favorite ways to access the Hoffman2 Cluster login nodes.&lt;br /&gt;
&lt;br /&gt;
==SSH - Command Line==&lt;br /&gt;
SSH stands for &#039;&#039;Secure Shell&#039;&#039; and is a method of logging into a remote computer over an encrypted connection.  The ssh command-line client is available on most Unix-like operating systems, with ports available for Windows.&lt;br /&gt;
&lt;br /&gt;
===Mac/Linux/Unix===&lt;br /&gt;
====Simple SSH====&lt;br /&gt;
Use the ssh command from a terminal:&lt;br /&gt;
 ssh login_id@hoffman2.idre.ucla.edu&lt;br /&gt;
where login_id is replaced by your cluster user name.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====GUI-Enabled SSH [Recommended]====&lt;br /&gt;
Macs (after Snow Leopard, 10.6.x) no longer come with an X Window System server pre-installed.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Before doing the following steps, please install [http://xquartz.macosforge.org/ XQuartz] and restart your computer.&#039;&#039;&#039;&lt;br /&gt;
For more information about XQuartz, read [http://support.apple.com/kb/ht5293 here].&lt;br /&gt;
&#039;&#039;&#039;WARNING:&#039;&#039;&#039; On Mac OS X 10.10 (Yosemite), you may need to add &amp;quot;export DISPLAY=:0.0&amp;quot; to your shell profile for X11 forwarding to work.&lt;br /&gt;
# Open up X11/XQuartz or Terminal.  Both are under &#039;&#039;Applications &amp;gt; Utilities&#039;&#039; on Macs.&lt;br /&gt;
# Type the command&lt;br /&gt;
#: &amp;lt;pre&amp;gt;$ ssh -X [USERNAME]@hoffman2.idre.ucla.edu&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: filling in your Hoffman2 username.&lt;br /&gt;
#: The &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; is for X11 Forwarding so that any graphics that are rendered on Hoffman2 get forwarded to the screen of your computer.&lt;br /&gt;
# Press enter and type in your password when it asks for it.  No characters or asterisks will show up while you type.&lt;br /&gt;
# If your password is correct, you will be greeted by the Hoffman2 login message: you have successfully SSHed into a login node.&lt;br /&gt;
&lt;br /&gt;
===Windows===&lt;br /&gt;
# Go [http://www.hoffman2.idre.ucla.edu/access/login/ here] and follow the instructions under &#039;&#039;Windows&#039;&#039;.  We recommend [http://www.hoffman2.idre.ucla.edu/access/putty/ PuTTY] or Cygwin.&lt;br /&gt;
#: (If you use PuTTY, please install [http://sourceforge.net/projects/xming/ Xming] for GUI access.)&lt;br /&gt;
# Once you have that set up, the process is the same as if you were on a Mac or Linux/Unix machine.&lt;br /&gt;
&lt;br /&gt;
==NX Client - GUI==&lt;br /&gt;
: &#039;&#039;The official description of how to do this is found [http://hpc.ucla.edu/hoffman2/access/nx.php here]&#039;&#039;&lt;br /&gt;
The NX Client program allows you to set up a Virtual Network Computing (VNC)-like session with Hoffman2.  This session will keep running even if your Internet connection drops in and out (much like [[Using Screen|screen]] on the command line).&lt;br /&gt;
&lt;br /&gt;
===Mac OS X 10.7+ / Windows / Linux===&lt;br /&gt;
==== What You Need====&lt;br /&gt;
# Go to the [https://www.nomachine.com/download NoMachine download page] and download and install the NoMachine client for Mac OS X, Windows, or Linux.&lt;br /&gt;
# Hoffman2 NX Client Public Key&lt;br /&gt;
#* To get the NX Client Public Key, follow the steps below or email support@ccn.ucla.edu&lt;br /&gt;
#** (OSX/Linux) Open up a Terminal and run the following command (replacing USERNAME with your Hoffman2 username)&lt;br /&gt;
#**:&amp;lt;code&amp;gt;$ scp USERNAME@hoffman2.idre.ucla.edu:/etc/nxserver/client.id_dsa.key ~/Documents/&amp;lt;/code&amp;gt;&lt;br /&gt;
#** (Windows) Use an SFTP program to download the file /etc/nxserver/client.id_dsa.key from Hoffman2 (hoffman2.idre.ucla.edu)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Setup====&lt;br /&gt;
# Open NoMachine (found under Applications or on your desktop) and click Continue.&lt;br /&gt;
# A window titled &amp;quot;New Connection&amp;quot; will appear.  Fill out the fields as follows:&lt;br /&gt;
#* Protocol -- SSH&lt;br /&gt;
#* Host -- &amp;quot;hoffman2.idre.ucla.edu&amp;quot;&lt;br /&gt;
#* Port -- 22&lt;br /&gt;
&lt;br /&gt;
#* Select &amp;quot;Use the NoMachine login&amp;quot;&lt;br /&gt;
#* Select Alternate Server Key and (...) - and find the file (client.id_dsa.key) you downloaded earlier (in your Documents folder).&lt;br /&gt;
#* Don&#039;t use a proxy&lt;br /&gt;
#* Name -- Something like &amp;quot;Hoffman2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Double click on the connection you just created (it should be the only one in the list).&lt;br /&gt;
# Enter your Hoffman2 username and password and click &amp;quot;OK&amp;quot; (You may also check the box labeled &amp;quot;Save this setting in the configuration file&amp;quot; to avoid retyping this in the future)&lt;br /&gt;
# Select &amp;quot;Create a new session&amp;quot; or &amp;quot;New Virtual Desktop&amp;quot;.&lt;br /&gt;
# In the next menu, select Create new &#039;&#039;&#039;GNOME&#039;&#039;&#039; virtual desktop.&lt;br /&gt;
# A virtual desktop should appear!&lt;br /&gt;
&lt;br /&gt;
Reconnections in this client are not currently supported for Hoffman2, so please make sure to log out and close your connections properly. [http://hpc.ucla.edu/hoffman2/access/nx.php#logout]&lt;br /&gt;
&lt;br /&gt;
====Troubleshooting====&lt;br /&gt;
If your NX Client session freezes and you are unable to close it properly, open &#039;&#039;NX Session Administrator&#039;&#039; and disconnect your session from there. This freezing often occurs when your Internet connection is lost abruptly. Another possible cause for freezing is scrolling on certain Windows touchpads.&lt;br /&gt;
&lt;br /&gt;
For more information, see [http://hpc.ucla.edu/hoffman2/access/nx.php Hoffman2 NX Client].&lt;br /&gt;
&lt;br /&gt;
If you are unable to open Firefox (&amp;quot;Firefox is already running, but is not responding. To open a new window, you must first close the existing Firefox process, or restart your system.&amp;quot;), deleting ~/.mozilla might fix the problem. &#039;&#039;Be warned:&#039;&#039; this will erase your profile, including bookmarks, history, saved passwords, etc! For instructions on backing up and restoring profile information, see [https://support.mozilla.org/en-US/kb/back-and-restore-information-firefox-profiles Mozilla Support]. Make sure to perform these actions within No Machine, and not on your local system.&lt;br /&gt;
&lt;br /&gt;
== Change Passwords ==&lt;br /&gt;
Once you&#039;ve logged on and made sure it works, you can change your password to something more memorable.&lt;br /&gt;
To change passwords, log on and type:&lt;br /&gt;
 passwd&lt;br /&gt;
It will ask for your old password, then prompt twice for the new one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/access/access.php Hoffman2 Access]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/access/nx.php Hoffman2 via NX Client]&lt;br /&gt;
*[[Hoffman2:Accessing_the_Cluster-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Accessing_the_Cluster&amp;diff=3121</id>
		<title>Hoffman2:Accessing the Cluster</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Accessing_the_Cluster&amp;diff=3121"/>
		<updated>2016-04-18T21:43:18Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
Here are some of our favorite ways to access the Hoffman2 Cluster login nodes.&lt;br /&gt;
&lt;br /&gt;
==SSH - Command Line==&lt;br /&gt;
SSH stands for &#039;&#039;Secure Shell&#039;&#039; and is a method of logging into a remote computer over an encrypted connection.  The ssh command-line client is available on most Unix-like operating systems, with ports available for Windows.&lt;br /&gt;
&lt;br /&gt;
===Mac/Linux/Unix===&lt;br /&gt;
====Simple SSH====&lt;br /&gt;
Use the ssh command from a terminal:&lt;br /&gt;
 ssh login_id@hoffman2.idre.ucla.edu&lt;br /&gt;
where login_id is replaced by your cluster user name.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====GUI-Enabled SSH [Recommended]====&lt;br /&gt;
Macs (after Snow Leopard, 10.6.x) no longer come with an X Window System server pre-installed.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Before doing the following steps, please install [http://xquartz.macosforge.org/ XQuartz] and restart your computer.&#039;&#039;&#039;&lt;br /&gt;
For more information about XQuartz, read [http://support.apple.com/kb/ht5293 here].&lt;br /&gt;
&#039;&#039;&#039;WARNING:&#039;&#039;&#039; On Mac OS X 10.10 (Yosemite), you may need to add &amp;quot;export DISPLAY=:0.0&amp;quot; to your shell profile for X11 forwarding to work.&lt;br /&gt;
# Open up X11/XQuartz or Terminal.  Both are under &#039;&#039;Applications &amp;gt; Utilities&#039;&#039; on Macs.&lt;br /&gt;
# Type the command&lt;br /&gt;
#: &amp;lt;pre&amp;gt;$ ssh -X [USERNAME]@hoffman2.idre.ucla.edu&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: filling in your Hoffman2 username.&lt;br /&gt;
#: The &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; is for X11 Forwarding so that any graphics that are rendered on Hoffman2 get forwarded to the screen of your computer.&lt;br /&gt;
# Press enter and type in your password when it asks for it.  No characters or asterisks will show up while you type.&lt;br /&gt;
# If your password is correct, you will be greeted by the Hoffman2 login message: you have successfully SSHed into a login node.&lt;br /&gt;
&lt;br /&gt;
===Windows===&lt;br /&gt;
# Go [http://www.hoffman2.idre.ucla.edu/access/login/ here] and follow the instructions under &#039;&#039;Windows&#039;&#039;.  We recommend [http://www.hoffman2.idre.ucla.edu/access/putty/ PuTTY] or Cygwin.&lt;br /&gt;
#: (If you use PuTTY, please install [http://sourceforge.net/projects/xming/ Xming] for GUI access.)&lt;br /&gt;
# Once you have that set up, the process is the same as if you were on a Mac or Linux/Unix machine.&lt;br /&gt;
&lt;br /&gt;
==NX Client - GUI==&lt;br /&gt;
: &#039;&#039;The official description of how to do this is found [http://hpc.ucla.edu/hoffman2/access/nx.php here]&#039;&#039;&lt;br /&gt;
The NX Client program allows you to set up a Virtual Network Computing (VNC)-like session with Hoffman2.  This session will keep running even if your Internet connection drops in and out (much like [[Using Screen|screen]] on the command line).&lt;br /&gt;
&lt;br /&gt;
===Mac OS X 10.7+ / Windows / Linux===&lt;br /&gt;
==== What You Need====&lt;br /&gt;
# Go to the [https://www.nomachine.com/download NoMachine download page] and download and install the NoMachine client for Mac OS X, Windows, or Linux.&lt;br /&gt;
# Hoffman2 NX Client Public Key&lt;br /&gt;
#* To get the NX Client Public Key, follow the steps below or email support@ccn.ucla.edu&lt;br /&gt;
#** (OSX/Linux) Open up a Terminal and run the following command (replacing USERNAME with your Hoffman2 username)&lt;br /&gt;
#**:&amp;lt;code&amp;gt;$ scp USERNAME@hoffman2.idre.ucla.edu:/etc/nxserver/client.id_dsa.key ~/Documents/&amp;lt;/code&amp;gt;&lt;br /&gt;
#** (Windows) Use an SFTP program to download the file /etc/nxserver/client.id_dsa.key from Hoffman2 (hoffman2.idre.ucla.edu)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Setup====&lt;br /&gt;
# Open NoMachine (found under Applications or on your desktop) and click Continue.&lt;br /&gt;
# A window titled &amp;quot;New Connection&amp;quot; will appear.  Fill out the fields as follows:&lt;br /&gt;
#* Protocol -- SSH&lt;br /&gt;
#* Host -- &amp;quot;hoffman2.idre.ucla.edu&amp;quot;&lt;br /&gt;
#* Port -- 22&lt;br /&gt;
&lt;br /&gt;
#* Select &amp;quot;Use the NoMachine login&amp;quot;&lt;br /&gt;
#* Select Alternate Server Key and (...) - and find the file (client.id_dsa.key) you downloaded earlier (in your Documents folder).&lt;br /&gt;
#* Don&#039;t use a proxy&lt;br /&gt;
#* Name -- Something like &amp;quot;Hoffman2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Double click on the connection you just created (it should be the only one in the list).&lt;br /&gt;
# Enter your Hoffman2 username and password and click &amp;quot;OK&amp;quot; (You may also check the box labeled &amp;quot;Save this setting in the configuration file&amp;quot; to avoid retyping this in the future)&lt;br /&gt;
# Select &amp;quot;Create a new session&amp;quot; or &amp;quot;New Virtual Desktop&amp;quot;.&lt;br /&gt;
# In the next menu, select Create new &#039;&#039;&#039;GNOME&#039;&#039;&#039; virtual desktop.&lt;br /&gt;
# A virtual desktop should appear!&lt;br /&gt;
&lt;br /&gt;
Reconnections in this client are not currently supported for Hoffman2, so please make sure to log out and close your connections properly. [http://hpc.ucla.edu/hoffman2/access/nx.php#logout]&lt;br /&gt;
&lt;br /&gt;
====Troubleshooting====&lt;br /&gt;
If your NX Client session freezes and you are unable to close it properly, open &#039;&#039;NX Session Administrator&#039;&#039; and disconnect your session from there. This freezing often occurs when your Internet connection is lost abruptly. Another possible cause for freezing is scrolling on certain Windows touchpads.&lt;br /&gt;
&lt;br /&gt;
For more information, see [http://hpc.ucla.edu/hoffman2/access/nx.php Hoffman2 NX Client].&lt;br /&gt;
&lt;br /&gt;
If you are unable to open Firefox (&amp;quot;Firefox is already running, but is not responding. To open a new window, you must first close the existing Firefox process, or restart your system.&amp;quot;), deleting ~/.mozilla might fix the problem. &#039;&#039;Be warned:&#039;&#039; this will erase your profile, including bookmarks, history, saved passwords, etc! For instructions on backing up and restoring profile information, see [https://support.mozilla.org/en-US/kb/back-and-restore-information-firefox-profiles Mozilla Support]. Make sure to perform these actions within No Machine, and not on your local system.&lt;br /&gt;
&lt;br /&gt;
== Change Passwords ==&lt;br /&gt;
Once you&#039;ve logged on and made sure it works, you can change your password to something more memorable.&lt;br /&gt;
To change passwords, log on and type:&lt;br /&gt;
 passwd&lt;br /&gt;
It will ask for your old password, then prompt twice for the new one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/access/access.php Hoffman2 Access]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/access/nx.php Hoffman2 via NX Client]&lt;br /&gt;
*[[Hoffman2:Accessing_the_Cluster-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=3120</id>
		<title>Hoffman2:Getting an Account</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=3120"/>
		<updated>2016-04-18T21:41:26Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* External Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==Requesting Hoffman2 Account==&lt;br /&gt;
===What You Need===&lt;br /&gt;
A UCLA BOL account, available for free to any UCLA staff, student, or faculty member. If you do not have a BOL account, head to the [https://logon.ucla.edu UCLA Logon] services page. Click on &amp;quot;Create UCLA Logon ID&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Applying for the Account===&lt;br /&gt;
 ATTENTION: If you are a PI interested in Hoffman2, please see the section&lt;br /&gt;
 [[#Becoming A Faculty Sponsor | Becoming a Faculty Sponsor]] below.&lt;br /&gt;
#Navigate to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;New User Registration&amp;quot;&lt;br /&gt;
#Authenticate using your UCLA BOL credentials&lt;br /&gt;
#Fill out the form with appropriate information. For Hoffman2, your Faculty Sponsor should be Mark Cohen, Alison Burggren (for Susan Bookheimer&#039;s lab), or your respective PI if they are a Faculty Sponsor on Hoffman.&lt;br /&gt;
;Proposed UserName&lt;br /&gt;
:This is the username you will use to sign in to the cluster.&lt;br /&gt;
;Select a Resource&lt;br /&gt;
:For the Mark Cohen/Susan Bookheimer labs, choose &amp;quot;Hoffman2&amp;quot;. However, you can request access to any cluster that is a member of the Grid Portal. &lt;br /&gt;
&lt;br /&gt;
Click Submit.&lt;br /&gt;
You will receive an email with a link to a temporary password. &#039;&#039;&#039;PLEASE WRITE IT DOWN.&#039;&#039;&#039; The link expires after 72 hours. If you missed the link or it expired, go back to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Application Page] and click Forgot Your Cluster Password? It will take about a day for the cluster to resend you a new password. &lt;br /&gt;
&lt;br /&gt;
You can change your password once you&#039;ve logged in by using [[Hoffman2:Accessing_the_Cluster#Change_Passwords | passwd.]]&lt;br /&gt;
&lt;br /&gt;
==Becoming A Faculty Sponsor==&lt;br /&gt;
If you are a PI or Lab Manager interested in the Hoffman2 Cluster, you will want to create a Faculty Sponsor account first. Also, if you are a member of another lab collaborating with the Cohen or Bookheimer labs, you may want to forward this information to your PI or Lab Manager. Faculty Sponsors can approve (or deny) applications for membership to their group. They also receive a group folder and a unique group id so their users can work and share data easily with each other.&lt;br /&gt;
&lt;br /&gt;
#Navigate to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;Request to become faculty sponsor&amp;quot; (at the bottom of the page)&lt;br /&gt;
#Fill out the form with appropriate information.&lt;br /&gt;
&lt;br /&gt;
Under &#039;Reason&#039;, almost any generic reason is appropriate for faculty members. For example, &amp;quot;To perform fMRI analysis.&amp;quot; will likely suffice.&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[https://grid.ucla.edu:9443/gridsphere/gridsphere?cid=home&amp;amp;JavaScript=enabled UCLA Grid Portal]&lt;br /&gt;
*[https://logon.ucla.edu UCLA BOL Home Page]&lt;br /&gt;
*[http://www.hoffman2.idre.ucla.edu/getting-started/ Hoffman2 Account Page]&lt;br /&gt;
*[[Hoffman2:Getting_an_Account-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=3119</id>
		<title>Hoffman2:Getting an Account</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=3119"/>
		<updated>2016-04-18T21:40:57Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Becoming A Faculty Sponsor */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==Requesting a Hoffman2 Account==&lt;br /&gt;
===What You Need===&lt;br /&gt;
A UCLA BOL account, available for free to any UCLA staff, student, or faculty member. If you do not have a BOL account, head to the [https://logon.ucla.edu UCLA Logon] services page. Click on &amp;quot;Create UCLA Logon ID&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Applying for the Account===&lt;br /&gt;
 ATTENTION: If you are a PI interested in Hoffman2, please see the section&lt;br /&gt;
 [[#Becoming A Faculty Sponsor | Becoming a Faculty Sponsor]] below.&lt;br /&gt;
#Navigate to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;New User Registration&amp;quot;&lt;br /&gt;
#Authenticate using your UCLA BOL credentials&lt;br /&gt;
#Fill out the form with appropriate information. For Hoffman2, your Faculty Sponsor should be Mark Cohen, Alison Burggren (for Susan Bookheimer&#039;s lab), or your respective PI if they are a Faculty Sponsor on Hoffman.&lt;br /&gt;
;Proposed UserName&lt;br /&gt;
:This is the username you will use to sign in to the cluster.&lt;br /&gt;
;Select a Resource&lt;br /&gt;
:For the Mark Cohen/Susan Bookheimer labs, choose &amp;quot;Hoffman2&amp;quot;. However, you can request access to any cluster that is a member of the Grid Portal. &lt;br /&gt;
&lt;br /&gt;
Click Submit.&lt;br /&gt;
You will receive an email with a link to a temporary password. &#039;&#039;&#039;PLEASE WRITE IT DOWN.&#039;&#039;&#039; The link expires after 72 hours. If you missed the link or it expired, go back to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Application Page] and click Forgot Your Cluster Password? It will take about a day for the cluster to resend you a new password. &lt;br /&gt;
&lt;br /&gt;
You can change your password once you&#039;ve logged in by using [[Hoffman2:Accessing_the_Cluster#Change_Passwords | passwd.]]&lt;br /&gt;
&lt;br /&gt;
==Becoming A Faculty Sponsor==&lt;br /&gt;
If you are a PI or Lab Manager interested in the Hoffman2 Cluster, you will want to create a Faculty Sponsor account first. Also, if you are a member of another lab collaborating with the Cohen or Bookheimer labs, you may want to forward this information to your PI or Lab Manager. Faculty Sponsors can approve (or deny) applications for membership to their group. They also receive a group folder and a unique group id so their users can work and share data easily with each other.&lt;br /&gt;
&lt;br /&gt;
#Navigate to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;Request to become faculty sponsor&amp;quot; (at the bottom of the page)&lt;br /&gt;
#Fill out the form with appropriate information.&lt;br /&gt;
&lt;br /&gt;
Under &#039;Reason&#039;, almost any generic reason is appropriate for faculty members. For example, &amp;quot;To perform fMRI analysis.&amp;quot; will likely suffice.&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[https://grid.ucla.edu:9443/gridsphere/gridsphere?cid=home&amp;amp;JavaScript=enabled UCLA Grid Portal]&lt;br /&gt;
*[https://logon.ucla.edu UCLA BOL Home Page]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Hoffman2 Account Page]&lt;br /&gt;
*[[Hoffman2:Getting_an_Account-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=3118</id>
		<title>Hoffman2:Getting an Account</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=3118"/>
		<updated>2016-04-18T21:40:03Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Applying for the Account */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==Requesting a Hoffman2 Account==&lt;br /&gt;
===What You Need===&lt;br /&gt;
A UCLA BOL account, available for free to any UCLA staff, student, or faculty member. If you do not have a BOL account, head to the [https://logon.ucla.edu UCLA Logon] services page. Click on &amp;quot;Create UCLA Logon ID&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Applying for the Account===&lt;br /&gt;
 ATTENTION: If you are a PI interested in Hoffman2, please see the section&lt;br /&gt;
 [[#Becoming A Faculty Sponsor | Becoming a Faculty Sponsor]] below.&lt;br /&gt;
#Navigate to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;New User Registration&amp;quot;&lt;br /&gt;
#Authenticate using your UCLA BOL credentials&lt;br /&gt;
#Fill out the form with appropriate information. For Hoffman2, your Faculty Sponsor should be Mark Cohen, Alison Burggren (for Susan Bookheimer&#039;s lab), or your respective PI if they are a Faculty Sponsor on Hoffman.&lt;br /&gt;
;Proposed UserName&lt;br /&gt;
:This is the username you will use to sign in to the cluster.&lt;br /&gt;
;Select a Resource&lt;br /&gt;
:For the Mark Cohen/Susan Bookheimer labs, choose &amp;quot;Hoffman2&amp;quot;. However, you can request access to any cluster that is a member of the Grid Portal. &lt;br /&gt;
&lt;br /&gt;
Click Submit.&lt;br /&gt;
You will receive an email with a link to a temporary password. &#039;&#039;&#039;PLEASE WRITE IT DOWN.&#039;&#039;&#039; The link expires after 72 hours. If you missed the link or it expired, go back to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Application Page] and click Forgot Your Cluster Password? It will take about a day for the cluster to resend you a new password. &lt;br /&gt;
&lt;br /&gt;
You can change your password once you&#039;ve logged in by using [[Hoffman2:Accessing_the_Cluster#Change_Passwords | passwd.]]&lt;br /&gt;
&lt;br /&gt;
==Becoming A Faculty Sponsor==&lt;br /&gt;
If you are a PI or Lab Manager interested in the Hoffman2 Cluster, you will want to create a Faculty Sponsor account first. Also, if you are a member of another lab collaborating with the Cohen or Bookheimer labs, you may want to forward this information to your PI or Lab Manager. Faculty Sponsors can approve (or deny) applications for membership to their group. They also receive a group folder and a unique group id so their users can work and share data easily with each other.&lt;br /&gt;
&lt;br /&gt;
#Navigate to the [http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;Request to become faculty sponsor&amp;quot; (at the bottom of the page)&lt;br /&gt;
#Fill out the form with appropriate information.&lt;br /&gt;
&lt;br /&gt;
Under &#039;Reason&#039;, almost any generic reason is appropriate for faculty members. For example, &amp;quot;To perform fMRI analysis.&amp;quot; will likely suffice.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[https://grid.ucla.edu:9443/gridsphere/gridsphere?cid=home&amp;amp;JavaScript=enabled UCLA Grid Portal]&lt;br /&gt;
*[https://logon.ucla.edu UCLA BOL Home Page]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Hoffman2 Account Page]&lt;br /&gt;
*[[Hoffman2:Getting_an_Account-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=3117</id>
		<title>Hoffman2:Getting an Account</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=3117"/>
		<updated>2016-04-18T21:39:34Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Applying for the Account */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==Requesting a Hoffman2 Account==&lt;br /&gt;
===What You Need===&lt;br /&gt;
A UCLA BOL account, available for free to any UCLA staff, student, or faculty member. If you do not have a BOL account, head to the [https://logon.ucla.edu UCLA Logon] services page. Click on &amp;quot;Create UCLA Logon ID&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Applying for the Account===&lt;br /&gt;
 ATTENTION: If you are a member of another lab, or a PI interested in Hoffman2, please see the section&lt;br /&gt;
 [[#Becoming A Faculty Sponsor | Becoming a Faculty Sponsor]] below.&lt;br /&gt;
#Navigate to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;New User Registration&amp;quot;&lt;br /&gt;
#Authenticate using your UCLA BOL credentials&lt;br /&gt;
#Fill out the form with appropriate information. For Hoffman2, your Faculty Sponsor should be Mark Cohen, Alison Burggren (for Susan Bookheimer&#039;s lab), or your respective PI if they are a Faculty Sponsor on Hoffman.&lt;br /&gt;
;Proposed UserName&lt;br /&gt;
:This is the username you will use to sign in to the cluster.&lt;br /&gt;
;Select a Resource&lt;br /&gt;
:For the Mark Cohen/Susan Bookheimer labs, choose &amp;quot;Hoffman2&amp;quot;. However, you can request access to any cluster that is a member of the Grid Portal. &lt;br /&gt;
&lt;br /&gt;
Click Submit.&lt;br /&gt;
You will receive an email with a link to a temporary password. &#039;&#039;&#039;PLEASE WRITE IT DOWN.&#039;&#039;&#039; The link expires after 72 hours. If you missed the link or it expired, go back to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Application Page] and click Forgot Your Cluster Password? It will take about a day for the cluster to resend you a new password. &lt;br /&gt;
&lt;br /&gt;
You can change your password once you&#039;ve logged in by using [[Hoffman2:Accessing_the_Cluster#Change_Passwords | passwd.]]&lt;br /&gt;
&lt;br /&gt;
==Becoming A Faculty Sponsor==&lt;br /&gt;
If you are a PI or Lab Manager interested in the Hoffman2 Cluster, you will want to create a Faculty Sponsor account first. Also, if you are a member of another lab collaborating with the Cohen or Bookheimer labs, you may want to forward this information to your PI or Lab Manager. Faculty Sponsors can approve (or deny) applications for membership to their group. They also receive a group folder and a unique group id so their users can work and share data easily with each other.&lt;br /&gt;
&lt;br /&gt;
#Navigate to the [http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;Request to become faculty sponsor&amp;quot; (at the bottom of the page)&lt;br /&gt;
#Fill out the form with appropriate information.&lt;br /&gt;
&lt;br /&gt;
Under &#039;Reason&#039;, almost any generic reason is appropriate for faculty members. For example, &amp;quot;To perform fMRI analysis.&amp;quot; will likely suffice.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[https://grid.ucla.edu:9443/gridsphere/gridsphere?cid=home&amp;amp;JavaScript=enabled UCLA Grid Portal]&lt;br /&gt;
*[https://logon.ucla.edu UCLA BOL Home Page]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Hoffman2 Account Page]&lt;br /&gt;
*[[Hoffman2:Getting_an_Account-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction-Historical_Notes&amp;diff=3116</id>
		<title>Hoffman2:Introduction-Historical Notes</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction-Historical_Notes&amp;diff=3116"/>
		<updated>2016-04-18T21:38:05Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Hoffman2 Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2:Introduction | Back to Hoffman2:Introduction]]&lt;br /&gt;
====Historical Notes====&lt;br /&gt;
===== Hoffman2 Storage =====&lt;br /&gt;
======April 2016======&lt;br /&gt;
Express Queue removed from the information (no longer in use by Hoffman2)&lt;br /&gt;
&lt;br /&gt;
======June 2013======&lt;br /&gt;
: &#039;&#039;Before July 2013, for users that were part of groups that purchased storage, their home directories were the same as their personal group directories.  e.g.&#039;&#039;&lt;br /&gt;
::&amp;lt;code&amp;gt; /u/home/j/jbruin&amp;lt;/code&amp;gt;&lt;br /&gt;
:&#039;&#039;did not exist, but&#039;&#039;&lt;br /&gt;
::&amp;lt;code&amp;gt; /u/home/mscohen/jbruin&amp;lt;/code&amp;gt;&lt;br /&gt;
:&#039;&#039;did exist and was the home directory (and personal group directory) for the user jbruin.  IDRE changed this behavior after the Summer Maintenance restart in 2013 to better separate users from their groups.  This separation more cleanly allows users to be part of multiple storage groups (e.g. belonging to sbook and mscohen groups), or switch between single groups over time, while retaining their own personal space on the cluster. A symlink named &#039;&#039;&#039;project&#039;&#039;&#039; was placed in the new home directories pointing to the old home directories. e.g.&#039;&#039;&lt;br /&gt;
::&amp;lt;code&amp;gt; /u/home/j/jbruin/project -&amp;gt; /u/home/mscohen/jbruin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
======June 2011======&lt;br /&gt;
: &#039;&#039;Before July 2011, there was a symlink pointing from /u/home9 to /u/home as a legacy support mechanism.  This symlink was finally removed after the Summer Maintenance of 2011 and some adjustments had to be made by anyone still using home9 references.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;CHANGES MAKE ME GO LIKE THIS!&#039;&#039;&#039;&lt;br /&gt;
[[File:Hoffman2-Pro_Status.jpg|none|500px]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3115</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3115"/>
		<updated>2016-04-18T21:36:10Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* External Links / Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA, named for Paul Hoffman (1947-2003).  It is maintained by [https://idre.ucla.edu/ IDRE] at UCLA, and the main official webpage is [http://www.hoffman2.idre.ucla.edu/ here].  With its high-end processor, data storage, and backup technologies, it is a useful tool for research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage; in September 2014 alone, more than 5.5 million compute hours were logged.  Click [[Hoffman2:Getting an Account|here]] to find out how to join, and see more usage statistics [http://www.hoffman2.idre.ucla.edu/usage-status/ here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our point.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 comprises more than 12,000 processors across three data centers, and this number continues to grow as the cluster expands. The individual processor cores are where your programs get executed when you submit a job to the cluster.  You can request different amounts of resources, such as how much RAM or how many CPU cores your job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster that has more than 300 nodes, but access to this must be requested separately from a normal Hoffman2 account.  For more information, go [http://www.hoffman2.idre.ucla.edu/computing/gpuq/ here] &lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (such as individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that group and can be used to run longer jobs ([http://www.hoffman2.idre.ucla.edu/computing/policies/ up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://www.hoffman2.idre.ucla.edu/data-storage/ click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancies and are fault tolerant, and redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, there are alternative backup options.&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you login to Hoffman2, you get dropped into your home directory immediately. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
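The pattern above can be sketched in a short bash snippet (the username jbruin is just the illustrative example from above, not a real account):

```shell
#!/bin/bash
# Build a Hoffman2-style home directory path from a username:
# the parent directory is the first letter of the username.
user="jbruin"                       # illustrative example
echo "/u/home/${user:0:1}/${user}"  # prints /u/home/j/jbruin
```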
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster-contributing group, you can also store data files in that group&#039;s common space, described in the next section.&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits to how many files can be put in them and how large those files can be.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
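Because the quota counts both bytes and number of files, it can help to tally a directory yourself with standard tools. A minimal sketch; the group path is illustrative, and on the cluster you would substitute your own group directory:

```shell
#!/bin/bash
# Tally the two quantities a group quota is measured in:
# total size and total number of files.
# The path is illustrative; on Hoffman2 you might use /u/project/mscohen.
dir="${1:-.}"
echo "files: $(find "$dir" -type f | wc -l)"
echo "size:  $(du -sh "$dir" | cut -f1)"
```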
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing a bunch of files in your home directory can be slow.  So faster temporary storage is available to use for ongoing jobs.  Read the official description [http://www.hoffman2.idre.ucla.edu/data-storage/ here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
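Since both temporary areas are purged automatically, a common pattern is to do heavy reading and writing there and copy the results somewhere permanent before the job ends. A minimal sketch; on Hoffman2 $SCRATCH (and, inside a job, $TMPDIR) are set for you, and the mktemp fallbacks and file names below are purely illustrative:

```shell
#!/bin/bash
# Stage heavy I/O in fast, purgeable scratch space, then copy the
# result to permanent storage before the purge window expires.
# The mktemp fallbacks only exist so this sketch runs anywhere.
workdir="${SCRATCH:-$(mktemp -d)}/myjob"
mkdir -p "$workdir"
echo "intermediate result" > "$workdir/result.txt"   # heavy I/O happens here
# Copy results out of scratch when done; $dest stands in for a
# permanent location such as your home or group directory.
dest="${DEST:-$(mktemp -d)}"
cp "$workdir/result.txt" "$dest/"
cat "$dest/result.txt"
```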
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means archiving data that you rarely need to access. Because disk space on Hoffman2 can be very expensive,&lt;br /&gt;
IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE, the successor to the Sun Grid Engine) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally get front-loaded, while more demanding ones must wait for adequate resources to free up.  The UGE tries to schedule jobs on computing nodes so as to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://www.hoffman2.idre.ucla.edu/computing/ here].&lt;br /&gt;
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs| Hoffman2 Cluster.]]&lt;br /&gt;
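A UGE job command file is just a shell script whose scheduler directives live in &amp;lt;code&amp;gt;#$&amp;lt;/code&amp;gt; comments. A hedged sketch, where the resource values are illustrative and should be checked against the Hoffman2 computing pages:

```shell
#!/bin/bash
# Minimal UGE job command file sketch; directive values are illustrative.
#$ -cwd                          # run in the directory you submitted from
#$ -o joblog.$JOB_ID             # write stdout to a per-job log file
#$ -j y                          # merge stderr into stdout
#$ -l h_rt=8:00:00,h_data=4G     # request 8 hours and 4 GB of memory
# Adding "#$ -l highp" would target your group's own nodes (up to 14 days).
echo "job running on $(hostname)"
```

You would submit a file like this with something along the lines of &amp;lt;code&amp;gt;qsub myjob.sh&amp;lt;/code&amp;gt;; because the directives are ordinary comments, the script also runs directly in a shell.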
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://www.hoffman2.idre.ucla.edu/ Hoffman2 Webpage]&lt;br /&gt;
*[http://www.hoffman2.idre.ucla.edu/usage-status/ Hoffman2 Statistics]&lt;br /&gt;
*[http://www.hoffman2.idre.ucla.edu/data-storage Hoffman2 Data Storage]&lt;br /&gt;
*[http://www.hoffman2.idre.ucla.edu/computing Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3114</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3114"/>
		<updated>2016-04-18T21:34:04Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA, named for Paul Hoffman (1947-2003).  It is maintained by [https://idre.ucla.edu/ IDRE] at UCLA, and the main official webpage is [http://www.hoffman2.idre.ucla.edu/ here].  With its high-end processor, data storage, and backup technologies, it is a useful tool for research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage; in September 2014 alone, more than 5.5 million compute hours were logged.  Click [[Hoffman2:Getting an Account|here]] to find out how to join, and see more usage statistics [http://www.hoffman2.idre.ucla.edu/usage-status/ here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our point.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 comprises more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded. The individual cores of the processors are where your programs get executed when you submit a job to the cluster.  There are ways to request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster with more than 300 nodes, but access to it must be requested separately from a normal Hoffman2 account.  For more information, go [http://www.hoffman2.idre.ucla.edu/computing/gpuq/ here].&lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that resource group and can be used to run longer jobs ([http://www.hoffman2.idre.ucla.edu/computing/policies/ up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://www.hoffman2.idre.ucla.edu/data-storage/ click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancy and are fault tolerant. Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, alternative backups are available.&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are dropped into your home directory immediately. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
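:The pattern above can be reproduced in a short shell sketch (the &amp;lt;code&amp;gt;jbruin&amp;lt;/code&amp;gt; username is a hypothetical example, not a real account):&lt;br /&gt;

```shell
# Build a Hoffman2-style home directory path from a username.
# "jbruin" is a hypothetical example user.
username="jbruin"
first_letter=$(printf '%.1s' "$username")      # first character -> "j"
home_dir="/u/home/${first_letter}/${username}"
echo "$home_dir"                               # -> /u/home/j/jbruin
```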
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster-contributing group, you can also store data files in that group&#039;s common space, described in the next section.&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits on how many files they can hold and how much total space those files can occupy.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is immediately and automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing many files in your home directory can be slow, so faster temporary storage is available for ongoing jobs.  Read the official description [http://www.hoffman2.idre.ucla.edu/data-storage/ here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
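:This stage-in / compute / stage-out pattern can be sketched as a fragment of a job script (a hedged sketch: &amp;lt;code&amp;gt;input.dat&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;results.dat&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;my_analysis&amp;lt;/code&amp;gt; are hypothetical names; &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; is set by the scheduler on the compute node):&lt;br /&gt;

```shell
# Sketch of the stage-in / compute / stage-out pattern using $TMPDIR.
# "input.dat", "results.dat", and "my_analysis" are hypothetical names.

# Stage in: copy the heavily-used file to the job's fast local directory.
cp "$HOME/data/input.dat" "$TMPDIR/"

# Compute: read and write repeatedly against the fast local copy.
my_analysis "$TMPDIR/input.dat" > "$TMPDIR/results.dat"

# Stage out: move results home before the job ends, since files in
# /work older than 24 hours are eligible for automatic deletion.
mv "$TMPDIR/results.dat" "$HOME/data/"
```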
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage refers to archiving data for long periods. Because disk space on Hoffman2 can be very expensive, IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally get scheduled sooner, while more demanding ones must wait for adequate resources to free up.  The UGE tries to schedule jobs on computing nodes in order to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time that require users to interact with the running program.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://www.hoffman2.idre.ucla.edu/computing/ here].&lt;br /&gt;
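For example, submitting to these queues with UGE commands might look like the following (a sketch assuming the standard UGE &amp;lt;code&amp;gt;-l&amp;lt;/code&amp;gt; resource syntax and the &amp;lt;code&amp;gt;highp&amp;lt;/code&amp;gt; complex; &amp;lt;code&amp;gt;my_job.sh&amp;lt;/code&amp;gt; and the specific limits are illustrative, so check the pages linked above for the exact options):&lt;br /&gt;

```shell
# Interactive queue: an interactive session of up to 24 hours.
# (qrsh and the -l resource syntax are standard UGE; the h_rt and
# h_data values here are illustrative, not required settings.)
qrsh -l h_rt=8:00:00,h_data=4G

# highp queue: a batch job of up to 14 days on your group's own nodes.
# "my_job.sh" is a hypothetical job command file.
qsub -l highp,h_rt=200:00:00,h_data=4G my_job.sh
```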
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs| Hoffman2 Cluster.]]&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3113</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3113"/>
		<updated>2016-04-18T21:33:23Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA named for Paul Hoffman (1947-2003).  It is maintained by [https://idre.ucla.edu/ IDRE] at UCLA, and the main official webpage is [http://www.hoffman2.idre.ucla.edu/ here].  With many high-end processor, data-storage, and backup technologies, it is a useful tool for executing research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, more than 5.5 million compute hours were logged.  See more usage statistics [http://www.hoffman2.idre.ucla.edu/usage-status/ here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our purposes.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 comprises more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded. The individual cores of the processors are where your programs get executed when you submit a job to the cluster.  There are ways to request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster with more than 300 nodes, but access to it must be requested separately from a normal Hoffman2 account.  For more information, go [http://www.hoffman2.idre.ucla.edu/computing/gpuq/ here].&lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that resource group and can be used to run longer jobs ([http://www.hoffman2.idre.ucla.edu/computing/policies/ up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://www.hoffman2.idre.ucla.edu/data-storage/ click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancy and are fault tolerant. Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, alternative backups are available.&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are dropped into your home directory immediately. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster-contributing group, you can also store data files in that group&#039;s common space, described in the next section.&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits on how many files they can hold and how much total space those files can occupy.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is immediately and automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing many files in your home directory can be slow, so faster temporary storage is available for ongoing jobs.  Read the official description [http://www.hoffman2.idre.ucla.edu/data-storage/ here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage refers to archiving data for long periods. Because disk space on Hoffman2 can be very expensive, IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally get scheduled sooner, while more demanding ones must wait for adequate resources to free up.  The UGE tries to schedule jobs on computing nodes in order to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time that require users to interact with the running program.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs| Hoffman2 Cluster.]]&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3112</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3112"/>
		<updated>2016-04-18T21:33:04Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Temporary Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA named for Paul Hoffman (1947-2003).  It is maintained by [https://idre.ucla.edu/ IDRE] at UCLA, and the main official webpage is [http://www.hoffman2.idre.ucla.edu/ here].  With many high-end processor, data-storage, and backup technologies, it is a useful tool for executing research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, more than 5.5 million compute hours were logged.  See more usage statistics [http://www.hoffman2.idre.ucla.edu/usage-status/ here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our purposes.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 comprises more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded. The individual cores of the processors are where your programs get executed when you submit a job to the cluster.  There are ways to request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster with more than 300 nodes, but access to it must be requested separately from a normal Hoffman2 account.  For more information, go [http://www.hoffman2.idre.ucla.edu/computing/gpuq/ here].&lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that resource group and can be used to run longer jobs ([http://www.hoffman2.idre.ucla.edu/computing/policies/ up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://www.hoffman2.idre.ucla.edu/data-storage/ click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancy and are fault tolerant. Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, alternative backups are available.&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are dropped into your home directory immediately. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster-contributing group, you can also store data files in that group&#039;s common space, described in the next section.&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits on how many files they can hold and how much total space those files can occupy.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is immediately and automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing many files in your home directory can be slow, so faster temporary storage is available for ongoing jobs.  Read the official description [http://www.hoffman2.idre.ucla.edu/data-storage/ here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
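:A minimal sketch of that pattern, with made-up file names (outside a real job, &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; falls back to /tmp here so the fragment can run anywhere):&lt;br /&gt;

```shell
# Hypothetical job-script fragment: stage a file into node-local $TMPDIR,
# do the repeated I/O there, then copy the result out before the job ends
# so the 24-hour /work cleanup cannot delete it. File names are made up.
TMP="${TMPDIR:-/tmp}"                   # UGE sets $TMPDIR inside a real job
echo "raw data" > input.txt             # stand-in for a real dataset
cp input.txt "$TMP/job_copy.txt"        # fast node-local working copy
echo "processed" >> "$TMP/job_copy.txt" # repeated reads/writes happen here
cp "$TMP/job_copy.txt" output.txt       # persist results at job completion
```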
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;Any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
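:As a sketch, the path can be composed from a username like so (the username here is hypothetical; on the cluster itself, prefer &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt;, which already points at this directory):&lt;br /&gt;

```shell
# Sketch of how the per-user scratch path is composed from a
# (hypothetical) username; on Hoffman2, just use $SCRATCH instead.
USERNAME="jbruin"
FIRST_LETTER="$(printf '%s' "$USERNAME" | cut -c1)"
SCRATCH_PATH="/u/scratch/$FIRST_LETTER/$USERNAME"
echo "$SCRATCH_PATH"    # prints /u/scratch/j/jbruin
```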
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means putting data away for long-term safekeeping. Disk space on Hoffman2 can be very expensive.&lt;br /&gt;
IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE, formerly the Sun Grid Engine) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally start sooner, while more demanding ones must wait for adequate resources to free up.  The UGE schedules jobs on computing nodes to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; express&lt;br /&gt;
: For jobs requesting at most 2 hours of computing time.&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
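To make the queue limits concrete, here is a hypothetical UGE job command file; the &amp;lt;code&amp;gt;-l&amp;lt;/code&amp;gt; flags are standard Grid Engine resource requests, while the program name is made up. A runtime request under 2 hours makes the job eligible for express; adding &amp;lt;code&amp;gt;-l highp&amp;lt;/code&amp;gt; targets your group&#039;s own nodes for up to 14 days.&lt;br /&gt;

```shell
# Write a hypothetical UGE job command file. The directives are standard
# Grid Engine flags; ./analyze_data is an illustrative program name.
cat > myjob.cmd <<'EOF'
#!/bin/bash
#$ -cwd                 # run from the submission directory
#$ -l h_rt=1:00:00      # 1 hour of runtime: fits the express queue
#$ -l h_data=2G         # 2 GB of memory
./analyze_data          # hypothetical program to run
EOF
grep -c '^#\$' myjob.cmd    # prints the number of UGE directives (3)
```

Submit with &amp;lt;code&amp;gt;qsub myjob.cmd&amp;lt;/code&amp;gt;.&lt;br /&gt;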
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs|Hoffman2 Cluster]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3111</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3111"/>
		<updated>2016-04-18T21:26:22Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Storage Space */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA named for Paul Hoffman (1947-2003).  It is maintained by the [https://idre.ucla.edu/ IDRE] at UCLA, and the main official webpage is [http://www.hoffman2.idre.ucla.edu/ here].  With many high-end processor, data storage, and backup technologies, it is a useful tool for research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, more than 5.5 million compute hours were logged.  See more usage statistics [http://www.hoffman2.idre.ucla.edu/usage-status/ here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our point.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 comprises more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded. The individual processor cores are where your programs get executed when you submit a job to the cluster.  There are ways to request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster that has more than 300 nodes, but access to this must be requested separately from a normal Hoffman2 account.  For more information, go [http://www.hoffman2.idre.ucla.edu/computing/gpuq/ here] &lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that group and can be used to run longer jobs ([http://www.hoffman2.idre.ucla.edu/computing/policies/ up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://www.hoffman2.idre.ucla.edu/data-storage/ click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancies and are fault tolerant. Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, alternative backups are available.&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are dropped into your home directory immediately. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster contributing group, you can also store data files in that group&#039;s common space described in the next section...&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits to how many files can be put in them and how large those files can be.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail because they cannot write out their results.  You may also have trouble starting GUI sessions because temporary files cannot be created.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, repeatedly reading and writing files in your home directory can be slow, so faster temporary storage is available for running jobs.  Read the official description [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;Any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means putting data away for long-term safekeeping. Disk space on Hoffman2 can be very expensive.&lt;br /&gt;
IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE, formerly the Sun Grid Engine) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally start sooner, while more demanding ones must wait for adequate resources to free up.  The UGE schedules jobs on computing nodes to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; express&lt;br /&gt;
: For jobs requesting at most 2 hours of computing time.&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs|Hoffman2 Cluster]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3110</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3110"/>
		<updated>2016-04-18T21:16:52Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Computing Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA named for Paul Hoffman (1947-2003).  It is maintained by the [https://idre.ucla.edu/ IDRE] at UCLA, and the main official webpage is [http://www.hoffman2.idre.ucla.edu/ here].  With many high-end processor, data storage, and backup technologies, it is a useful tool for research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, more than 5.5 million compute hours were logged.  See more usage statistics [http://www.hoffman2.idre.ucla.edu/usage-status/ here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our point.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 comprises more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded. The individual processor cores are where your programs get executed when you submit a job to the cluster.  There are ways to request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster that has more than 300 nodes, but access to this must be requested separately from a normal Hoffman2 account.  For more information, go [http://www.hoffman2.idre.ucla.edu/computing/gpuq/ here] &lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that group and can be used to run longer jobs ([http://www.hoffman2.idre.ucla.edu/computing/policies/ up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancies and are fault tolerant. Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, alternative backups are available.&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are dropped into your home directory immediately. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster contributing group, you can also store data files in that group&#039;s common space described in the next section...&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits to how many files can be put in them and how large those files can be.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail because they cannot write out their results.  You may also have trouble starting GUI sessions because temporary files cannot be created.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, repeatedly reading and writing files in your home directory can be slow, so faster temporary storage is available for running jobs.  Read the official description [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;Any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means putting data away for long-term safekeeping. Disk space on Hoffman2 can be very expensive.&lt;br /&gt;
IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE, formerly the Sun Grid Engine) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally start sooner, while more demanding ones must wait for adequate resources to free up.  The UGE schedules jobs on computing nodes to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; express&lt;br /&gt;
: For jobs requesting at most 2 hours of computing time.&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs|Hoffman2 Cluster]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3109</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3109"/>
		<updated>2016-04-18T21:15:58Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Computing Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA named for Paul Hoffman (1947-2003).  It is maintained by the [https://idre.ucla.edu/ IDRE] at UCLA, and the main official webpage is [http://www.hoffman2.idre.ucla.edu/ here].  With many high-end processor, data storage, and backup technologies, it is a useful tool for research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, more than 5.5 million compute hours were logged.  See more usage statistics [http://www.hoffman2.idre.ucla.edu/usage-status/ here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our point.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 comprises more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded. The individual processor cores are where your programs get executed when you submit a job to the cluster.  There are ways to request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster that has more than 300 nodes, but access to it must be requested separately from a normal Hoffman2 account.  For more information, see [http://hpc.ucla.edu/hoffman2/computing/gpuq.php here].&lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (such as individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that group and can be used to run longer jobs ([http://hpc.ucla.edu/hoffman2/computing/policies.php#highp up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
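A minimal sketch of how such resource requests look in practice, written as a UGE job command file (the flag names below are standard SGE/UGE resource requests and &#039;&#039;my_analysis&#039;&#039; is a hypothetical program; check the official Hoffman2 docs for the exact options):&lt;br /&gt;

```shell
#!/bin/bash
# example.cmd -- sketch of a UGE job command file (assumed flag names).
#$ -cwd                       # run the job from the submission directory
#$ -o example.joblog          # send stdout to this file
#$ -j y                       # merge stderr into stdout
#$ -l h_data=4G,h_rt=8:00:00  # request 4GB RAM per core and 8 hours runtime
#$ -pe shared 4               # request 4 cores on a single node

./my_analysis --threads 4     # hypothetical program using the 4 cores
```

Submit it from a login node with qsub example.cmd.&lt;br /&gt;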
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancies and are fault-tolerant.  Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, there are alternative [ backups ].&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are dropped into your home directory. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster-contributing group, you can also store data files in that group&#039;s common space, described in the next section.&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits on both the number of files they can hold and the total size of those files.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing a bunch of files in your home directory can be slow.  So faster temporary storage is available to use for ongoing jobs.  Read the official description [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
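The stage-in / compute / stage-out pattern described above can be sketched as a job-script fragment (&#039;&#039;input.dat&#039;&#039; and &#039;&#039;crunch&#039;&#039; are hypothetical names; $TMPDIR is set by the scheduler for each job):&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: stage files onto fast node-local disk, compute, copy results home.
cp "$HOME/input.dat" "$TMPDIR/"             # stage input onto node-local /work
cd "$TMPDIR"
"$HOME/bin/crunch" input.dat > results.out  # repeated I/O stays on local disk
cp results.out "$HOME/"                     # copy out before /work is cleaned
```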
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;Any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means archiving data for long periods. Disk space on Hoffman2 can be very expensive.&lt;br /&gt;
IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE, formerly the Sun Grid Engine) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally start sooner, while more demanding ones must wait for adequate resources to free up.  The UGE tries to schedule jobs on computing nodes to make the most efficient use of the available resources.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; express&lt;br /&gt;
: For jobs requesting at most 2 hours of computing time.&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time that require users to interact with the running program.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
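On Grid Engine clusters you typically do not name a queue directly; the scheduler routes your job based on its resource requests. A sketch of how the queues above might be reached (script names are hypothetical; confirm the flags against the Hoffman2 computing docs):&lt;br /&gt;

```shell
qsub -l h_rt=2:00:00 quick.cmd         # 2h max runtime: eligible for express
qrsh -l h_rt=8:00:00                   # interactive session on a compute node
qsub -l h_rt=336:00:00,highp long.cmd  # highp: up to 14 days (336h) on group-owned nodes
```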
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs|Hoffman2 Cluster]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3108</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3108"/>
		<updated>2016-04-18T21:10:03Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* What is Hoffman2? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA and is named for Paul Hoffman (1947-2003).  It is maintained by the [https://idre.ucla.edu/ IDRE] at UCLA, and the main official webpage is [http://www.hoffman2.idre.ucla.edu/ here].  With high-end processor, data-storage, and backup technologies, it is a useful tool for running research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, more than 5.5 million compute hours were logged.  See more usage statistics [http://www.hoffman2.idre.ucla.edu/usage-status/ here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our purposes.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 comprises more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded ([http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php stats]). The individual processor cores are where your programs get executed when you submit a job to the cluster.  You can request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster that has more than 300 nodes, but access to it must be requested separately from a normal Hoffman2 account.  For more information, see [http://hpc.ucla.edu/hoffman2/computing/gpuq.php here].&lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (such as individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that group and can be used to run longer jobs ([http://hpc.ucla.edu/hoffman2/computing/policies.php#highp up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancies and are fault-tolerant.  Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, there are alternative [ backups ].&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are dropped into your home directory. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster-contributing group, you can also store data files in that group&#039;s common space, described in the next section.&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits on both the number of files they can hold and the total size of those files.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing a bunch of files in your home directory can be slow.  So faster temporary storage is available to use for ongoing jobs.  Read the official description [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;Any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means archiving data for long periods. Disk space on Hoffman2 can be very expensive.&lt;br /&gt;
IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE, formerly the Sun Grid Engine) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally start sooner, while more demanding ones must wait for adequate resources to free up.  The UGE tries to schedule jobs on computing nodes to make the most efficient use of the available resources.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; express&lt;br /&gt;
: For jobs requesting at most 2 hours of computing time.&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time that require users to interact with the running program.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs|Hoffman2 Cluster]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3107</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3107"/>
		<updated>2016-04-18T21:09:27Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* What is Hoffman2? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA and is named for Paul Hoffman (1947-2003).  It is maintained by the [https://idre.ucla.edu/ IDRE] at UCLA, and the main official webpage is [http://www.hoffman2.idre.ucla.edu/ here].  With high-end processor, data-storage, and backup technologies, it is a useful tool for running research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, more than 5.5 million compute hours were logged.  See more usage statistics [http://www.hoffman2.idre.ucla.edu/usage-status/ here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our purposes.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 comprises more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded ([http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php stats]). The individual processor cores are where your programs get executed when you submit a job to the cluster.  You can request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster that has more than 300 nodes, but access to it must be requested separately from a normal Hoffman2 account.  For more information, see [http://hpc.ucla.edu/hoffman2/computing/gpuq.php here].&lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (such as individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that group and can be used to run longer jobs ([http://hpc.ucla.edu/hoffman2/computing/policies.php#highp up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancies and are fault-tolerant.  Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, there are alternative [ backups ].&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are dropped into your home directory. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster-contributing group, you can also store data files in that group&#039;s common space, described in the next section.&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits on both the number of files they can hold and the total size of those files.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing a bunch of files in your home directory can be slow.  So faster temporary storage is available to use for ongoing jobs.  Read the official description [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
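A minimal sketch of that staging pattern in shell (the &amp;lt;code&amp;gt;stage_demo&amp;lt;/code&amp;gt; subdirectory and the file names are made-up examples; &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; is only set by the scheduler inside a job, so the sketch falls back to /tmp elsewhere):&lt;br /&gt;

```shell
# Sketch: stage a file into the fast per-job directory, work on it
# there, then copy results home before the job ends.
# "stage_demo" and the file names are made-up examples.
workdir="${TMPDIR:-/tmp}/stage_demo"
mkdir -p "$workdir"
echo "input data" > "$workdir/input.dat"     # stand-in for staging a dataset
# repeated reads/writes hit the fast local disk, not your home directory
tr 'a-z' 'A-Z' < "$workdir/input.dat" > "$workdir/result.dat"
result="$(cat "$workdir/result.dat")"
echo "$result"
# at job completion, copy results home so the 24-hour cleanup cannot
# delete them, e.g.:  cp "$workdir/result.dat" "$HOME/"
rm -rf "$workdir"
```

Inside a real job script the /tmp fallback is unnecessary, since &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; is always defined there.&lt;br /&gt;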
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;Any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means archiving data that you rarely access for long periods. Disk space on Hoffman2 can be very expensive.&lt;br /&gt;
IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE, formerly the Sun Grid Engine) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally start sooner, while more demanding ones must wait for adequate resources to free up.  The UGE tries to schedule jobs on computing nodes so as to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; express&lt;br /&gt;
: For jobs requesting at most 2 hours of computing time.&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs|Hoffman2 Cluster]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3106</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3106"/>
		<updated>2016-04-18T21:09:00Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* What is Hoffman2? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA, named for Paul Hoffman (1947-2003).  It is maintained by the [https://idre.ucla.edu/ IDRE] at UCLA and the main official webpage is [http://www.hoffman2.idre.ucla.edu/].  With many high-end processor, data storage, and backup technologies, it is a useful tool for executing research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, more than 5.5 million compute hours were logged.  See more usage statistics [http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our point.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 is made up of more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded. [http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php  Stats.] The individual cores of the processors are where your programs get executed when you submit a job to the cluster.  There are ways to request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster that has more than 300 nodes, but access to this must be requested separately from a normal Hoffman2 account.  For more information, go [http://hpc.ucla.edu/hoffman2/computing/gpuq.php here] &lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that group and can be used to run longer jobs ([http://hpc.ucla.edu/hoffman2/computing/policies.php#highp up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancies and are fault tolerant. Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, there are alternative [ backups ].&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are dropped into your home directory immediately. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
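The first-letter rule above can be applied mechanically; a small shell sketch (&amp;lt;code&amp;gt;jbruin&amp;lt;/code&amp;gt; is the wiki example username, not a real account):&lt;br /&gt;

```shell
# Build a home-directory path from a username using the
# first-letter rule ("jbruin" is the wiki example, not a real account).
login_id="jbruin"
first_letter="$(printf '%s' "$login_id" | cut -c1)"
home_dir="/u/home/${first_letter}/${login_id}"
echo "$home_dir"    # /u/home/j/jbruin
```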
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster-contributing group, you can also store data files in that group&#039;s common space, described in the next section.&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits to how many files can be put in them and how large those files can be.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
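Because the quota trips on either total size or file count, it helps to watch both numbers; here is a generic shell sketch using standard tools (&amp;lt;code&amp;gt;usage_report&amp;lt;/code&amp;gt; is a made-up helper name, not an official cluster command):&lt;br /&gt;

```shell
# Report the two quantities a group quota is measured against:
# total size (vs the TB limit) and file count (vs the million-file
# limit). "usage_report" is a made-up helper, not a cluster tool.
usage_report() {
    du -sh "$1"                  # total size of the directory tree
    find "$1" -type f | wc -l    # number of files in the tree
}
# example (the path is the wiki example group directory):
# usage_report /u/project/mscohen
```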
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing many files in your home directory can be slow, so faster temporary storage is available for jobs while they run.  Read the official description [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;Any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means archiving data that you rarely access for long periods. Disk space on Hoffman2 can be very expensive.&lt;br /&gt;
IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE, formerly the Sun Grid Engine) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally start sooner, while more demanding ones must wait for adequate resources to free up.  The UGE tries to schedule jobs on computing nodes so as to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; express&lt;br /&gt;
: For jobs requesting at most 2 hours of computing time.&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs|Hoffman2 Cluster]].&lt;br /&gt;
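As a rough illustration of how a job ends up in a particular queue, here is a minimal UGE job command file (the file name, log name, and resource values are made-up examples; &amp;lt;code&amp;gt;h_rt&amp;lt;/code&amp;gt; is the standard Grid Engine runtime request and &amp;lt;code&amp;gt;h_data&amp;lt;/code&amp;gt; the memory request commonly used on Hoffman2):&lt;br /&gt;

```shell
# Write a minimal Grid Engine job command file. The resource values
# (2 hours, 4GB) are illustrative only; a 2-hour h_rt would fit the
# express queue described above. qsub itself only works on a login node.
cat > myjob.cmd <<'EOF'
#!/bin/bash
#$ -cwd                        # run from the submission directory
#$ -o myjob.log -j y           # merge stdout/stderr into one log file
#$ -l h_rt=2:00:00,h_data=4G   # runtime and memory requests
echo "running on $(hostname)"
EOF
# on the cluster:  qsub myjob.cmd
```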
&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3105</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=3105"/>
		<updated>2016-04-18T21:07:19Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* What is Hoffman2? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA, named for Paul Hoffman (1947-2003).  It is maintained by the [https://idre.ucla.edu/ IDRE] at UCLA and the main official webpage is [https://idre.ucla.edu/hoffman2].  With many high-end processor, data storage, and backup technologies, it is a useful tool for executing research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, more than 5.5 million compute hours were logged.  See more usage statistics [http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Univa Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our point.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 is made up of more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded. [http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php  Stats.] The individual cores of the processors are where your programs get executed when you submit a job to the cluster.  There are ways to request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster that has more than 300 nodes, but access to this must be requested separately from a normal Hoffman2 account.  For more information, go [http://hpc.ucla.edu/hoffman2/computing/gpuq.php here] &lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that group and can be used to run longer jobs ([http://hpc.ucla.edu/hoffman2/computing/policies.php#highp up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancies and are fault tolerant. Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, there are alternative [ backups ].&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are dropped into your home directory immediately. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster-contributing group, you can also store data files in that group&#039;s common space, described in the next section.&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits to how many files can be put in them and how large those files can be.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing many files in your home directory can be slow, so faster temporary storage is available for jobs while they run.  Read the official description [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;Any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means archiving data that you rarely access for long periods. Disk space on Hoffman2 can be very expensive.&lt;br /&gt;
IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE, formerly the Sun Grid Engine) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs generally start sooner, while more demanding ones must wait for adequate resources to free up.  The UGE tries to schedule jobs on computing nodes so as to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; express&lt;br /&gt;
: For jobs requesting at most 2 hours of computing time.&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs|Hoffman2 Cluster]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=FNIRS&amp;diff=2957</id>
		<title>FNIRS</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=FNIRS&amp;diff=2957"/>
		<updated>2015-11-18T19:28:33Z</updated>

		<summary type="html">&lt;p&gt;Acho: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Encryption_Policy&amp;diff=2955</id>
		<title>Encryption Policy</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Encryption_Policy&amp;diff=2955"/>
		<updated>2015-10-15T22:41:23Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Macs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Encryption Policy ==&lt;br /&gt;
The policy applies to:&lt;br /&gt;
&lt;br /&gt;
 • All faculty, fellows, residents, students, volunteers and staff&lt;br /&gt;
 • All mobile devices – laptops, tablets, and mobile phones&lt;br /&gt;
 • All removable media – external hard drives and USB flash drives&lt;br /&gt;
 • Non-University-owned devices&lt;br /&gt;
 • Devices issued by UCLA Health or DGSOM&lt;br /&gt;
 • All devices WHETHER OR NOT they are used to access restricted information&lt;br /&gt;
 • Any mobile device used for any University business&lt;br /&gt;
&lt;br /&gt;
For more information, go [http://employee.uclahealth.org/device-security/ Here]&lt;br /&gt;
&lt;br /&gt;
== Devices ==&lt;br /&gt;
=== Laptops ===&lt;br /&gt;
====Windows====&lt;br /&gt;
Recommend using &#039;&#039;&#039;BitLocker Drive Encryption&#039;&#039;&#039; &lt;br /&gt;
(built into Windows Vista and later)&lt;br /&gt;
&lt;br /&gt;
[http://www.pcworld.com/article/2308725/a-beginners-guide-to-bitlocker-windows-built-in-encryption-tool.html Instructions Here]. (It is very important to save the backup-code/key at the end!)&lt;br /&gt;
&lt;br /&gt;
====Macs ====&lt;br /&gt;
Recommend using &#039;&#039;&#039;FileVault encryption&#039;&#039;&#039; (built into Mac OS X)&lt;br /&gt;
&lt;br /&gt;
[https://support.apple.com/en-us/HT204837 Instructions Here] (It is very important to save the backup-code/key at the end!)&lt;br /&gt;
&lt;br /&gt;
====Others====&lt;br /&gt;
Other encryption programs are suggested [http://employee.uclahealth.org/device-security-toolkit/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===USB/External Device ===&lt;br /&gt;
&#039;&#039;We highly recommend going to an Encryption Fair or your local IT staff for help with encryption.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[http://www.tomsguide.com/faq/id-2318737/encrypt-usb-flash-drive.html Windows Instructions]&lt;br /&gt;
&lt;br /&gt;
[http://www.theinstructional.com/guides/encrypt-an-external-disk-or-usb-stick-with-a-password Mac Instructions]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Mobile Device ===&lt;br /&gt;
Mobile phones need to have AirWatch installed, if eligible.&lt;br /&gt;
&lt;br /&gt;
To check if you are eligible: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap;  white-space: -moz-pre-wrap;  white-space: -pre-wrap; white-space: -o-pre-wrap;  word-wrap: break-word;&amp;quot;&amp;gt;If the user is non-exempt (e.g. hourly pay status/bi-weekly pay), then they are not eligible for Airwatch unless approved by HR. &lt;br /&gt;
&lt;br /&gt;
If the user is exempt (paid monthly), then they are eligible and should go to the encryption fair starting on Monday for help on downloading and installing Airwatch.  Users also MUST have an AD account. &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A PIN/pattern screen lock alone is not enough (though it is still highly recommended!).&lt;br /&gt;
Please install AirWatch at the [http://employee.uclahealth.org/encryption-fairs/ Encryption Fairs].&lt;br /&gt;
&lt;br /&gt;
== Encryption Fair ==&lt;br /&gt;
For more information and dates: [http://employee.uclahealth.org/encryption-fairs/ Go Here]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Encryption_Policy&amp;diff=2954</id>
		<title>Encryption Policy</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Encryption_Policy&amp;diff=2954"/>
		<updated>2015-10-15T22:41:07Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Encryption Policy ==&lt;br /&gt;
Outlined here:&lt;br /&gt;
&lt;br /&gt;
 • All faculty, fellows, residents, students, volunteers and staff&lt;br /&gt;
 • All mobile devices – laptops, tablets, and mobile phones&lt;br /&gt;
 • All removable media – external hard drives and USB flash drives&lt;br /&gt;
 • Non-University-owned devices&lt;br /&gt;
 • Devices issued by UCLA Health or DGSOM&lt;br /&gt;
 • All devices WHETHER OR NOT they are used to access restricted information&lt;br /&gt;
 • Any mobile device used for any University business&lt;br /&gt;
&lt;br /&gt;
For more information, go [http://employee.uclahealth.org/device-security/ Here]&lt;br /&gt;
&lt;br /&gt;
== Devices ==&lt;br /&gt;
=== Laptops ===&lt;br /&gt;
====Windows====&lt;br /&gt;
We recommend using &#039;&#039;&#039;BitLocker Drive Encryption&#039;&#039;&#039; (built into Windows Vista and later).&lt;br /&gt;
&lt;br /&gt;
[http://www.pcworld.com/article/2308725/a-beginners-guide-to-bitlocker-windows-built-in-encryption-tool.html Instructions Here]. (It is very important to save the backup-code/key at the end!)&lt;br /&gt;
&lt;br /&gt;
====Macs ====&lt;br /&gt;
We recommend using &#039;&#039;&#039;FileVault Encryption&#039;&#039;&#039; (built into Mac OS X).&lt;br /&gt;
&lt;br /&gt;
[https://support.apple.com/en-us/HT204837 Instructions Here]&lt;br /&gt;
&lt;br /&gt;
====Others====&lt;br /&gt;
Other encryption programs are suggested [http://employee.uclahealth.org/device-security-toolkit/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===USB/External Device ===&lt;br /&gt;
&#039;&#039;We highly recommend visiting an Encryption Fair or your local IT staff for help with encryption.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[http://www.tomsguide.com/faq/id-2318737/encrypt-usb-flash-drive.html Windows Instructions]&lt;br /&gt;
&lt;br /&gt;
[http://www.theinstructional.com/guides/encrypt-an-external-disk-or-usb-stick-with-a-password Mac Instructions]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Mobile Device ===&lt;br /&gt;
Mobile phones need to have AirWatch installed, if eligible.&lt;br /&gt;
&lt;br /&gt;
To check if you are eligible: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap;  white-space: -moz-pre-wrap;  white-space: -pre-wrap; white-space: -o-pre-wrap;  word-wrap: break-word;&amp;quot;&amp;gt;If the user is non-exempt (e.g. hourly pay status/bi-weekly pay), then they are not eligible for Airwatch unless approved by HR. &lt;br /&gt;
&lt;br /&gt;
If the user is exempt (paid monthly), then they are eligible and should go to the encryption fair starting on Monday for help on downloading and installing Airwatch.  Users also MUST have an AD account. &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A PIN/pattern screen lock alone is not enough (though it is still highly recommended!).&lt;br /&gt;
Please install AirWatch at the [http://employee.uclahealth.org/encryption-fairs/ Encryption Fairs].&lt;br /&gt;
&lt;br /&gt;
== Encryption Fair ==&lt;br /&gt;
For more information and dates: [http://employee.uclahealth.org/encryption-fairs/ Go Here]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Encryption_Policy&amp;diff=2953</id>
		<title>Encryption Policy</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Encryption_Policy&amp;diff=2953"/>
		<updated>2015-10-15T22:40:48Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Encryption Policy ==&lt;br /&gt;
Outlined here:&lt;br /&gt;
&lt;br /&gt;
 • All faculty, fellows, residents, students, volunteers and staff&lt;br /&gt;
 • All mobile devices – laptops, tablets, and mobile phones&lt;br /&gt;
 • All removable media – external hard drives and USB flash drives&lt;br /&gt;
 • Non-University-owned devices&lt;br /&gt;
 • Devices issued by UCLA Health or DGSOM&lt;br /&gt;
 • All devices WHETHER OR NOT they are used to access restricted information&lt;br /&gt;
 • Any mobile device used for any University business&lt;br /&gt;
&lt;br /&gt;
For more information, go [http://employee.uclahealth.org/device-security/ Here]&lt;br /&gt;
&lt;br /&gt;
== Devices ==&lt;br /&gt;
=== Laptops ===&lt;br /&gt;
====Windows====&lt;br /&gt;
We recommend using &#039;&#039;&#039;BitLocker Drive Encryption&#039;&#039;&#039; (built into Windows Vista and later).&lt;br /&gt;
&lt;br /&gt;
[http://www.pcworld.com/article/2308725/a-beginners-guide-to-bitlocker-windows-built-in-encryption-tool.html Instructions Here]&lt;br /&gt;
&lt;br /&gt;
It is very important to save the backup-code/key at the end!&lt;br /&gt;
&lt;br /&gt;
====Macs ====&lt;br /&gt;
We recommend using &#039;&#039;&#039;FileVault Encryption&#039;&#039;&#039; (built into Mac OS X).&lt;br /&gt;
&lt;br /&gt;
[https://support.apple.com/en-us/HT204837 Instructions Here]&lt;br /&gt;
&lt;br /&gt;
====Others====&lt;br /&gt;
Other encryption programs are suggested [http://employee.uclahealth.org/device-security-toolkit/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===USB/External Device ===&lt;br /&gt;
&#039;&#039;We highly recommend visiting an Encryption Fair or your local IT staff for help with encryption.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[http://www.tomsguide.com/faq/id-2318737/encrypt-usb-flash-drive.html Windows Instructions]&lt;br /&gt;
&lt;br /&gt;
[http://www.theinstructional.com/guides/encrypt-an-external-disk-or-usb-stick-with-a-password Mac Instructions]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Mobile Device ===&lt;br /&gt;
Mobile phones need to have AirWatch installed, if eligible.&lt;br /&gt;
&lt;br /&gt;
To check if you are eligible: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap;  white-space: -moz-pre-wrap;  white-space: -pre-wrap; white-space: -o-pre-wrap;  word-wrap: break-word;&amp;quot;&amp;gt;If the user is non-exempt (e.g. hourly pay status/bi-weekly pay), then they are not eligible for Airwatch unless approved by HR. &lt;br /&gt;
&lt;br /&gt;
If the user is exempt (paid monthly), then they are eligible and should go to the encryption fair starting on Monday for help on downloading and installing Airwatch.  Users also MUST have an AD account. &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A PIN/pattern screen lock alone is not enough (though it is still highly recommended!).&lt;br /&gt;
Please install AirWatch at the [http://employee.uclahealth.org/encryption-fairs/ Encryption Fairs].&lt;br /&gt;
&lt;br /&gt;
== Encryption Fair ==&lt;br /&gt;
For more information and dates: [http://employee.uclahealth.org/encryption-fairs/ Go Here]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Encryption_Policy&amp;diff=2952</id>
		<title>Encryption Policy</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Encryption_Policy&amp;diff=2952"/>
		<updated>2015-10-15T22:39:43Z</updated>

		<summary type="html">&lt;p&gt;Acho: Created page with &amp;quot;== Encryption Policy == Outlined here:   • All faculty, fellows, residents, students, volunteers and staff  • All mobile devices – laptops; tablets, mobile phones  • A...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Encryption Policy ==&lt;br /&gt;
Outlined here:&lt;br /&gt;
&lt;br /&gt;
 • All faculty, fellows, residents, students, volunteers and staff&lt;br /&gt;
 • All mobile devices – laptops, tablets, and mobile phones&lt;br /&gt;
 • All removable media – external hard drives and USB flash drives&lt;br /&gt;
 • Non-University-owned devices&lt;br /&gt;
 • Devices issued by UCLA Health or DGSOM&lt;br /&gt;
 • All devices WHETHER OR NOT they are used to access restricted information&lt;br /&gt;
 • Any mobile device used for any University business&lt;br /&gt;
&lt;br /&gt;
For more information, go [http://employee.uclahealth.org/device-security/ Here]&lt;br /&gt;
&lt;br /&gt;
== Devices ==&lt;br /&gt;
=== Laptops ===&lt;br /&gt;
====Windows====&lt;br /&gt;
We recommend using &#039;&#039;&#039;BitLocker Drive Encryption&#039;&#039;&#039; (built into Windows Vista and later).&lt;br /&gt;
&lt;br /&gt;
[http://www.pcworld.com/article/2308725/a-beginners-guide-to-bitlocker-windows-built-in-encryption-tool.html Instructions Here]&lt;br /&gt;
&lt;br /&gt;
====Macs ====&lt;br /&gt;
We recommend using &#039;&#039;&#039;FileVault Encryption&#039;&#039;&#039; (built into Mac OS X).&lt;br /&gt;
&lt;br /&gt;
[https://support.apple.com/en-us/HT204837 Instructions Here]&lt;br /&gt;
&lt;br /&gt;
====Others====&lt;br /&gt;
Other encryption programs are suggested [http://employee.uclahealth.org/device-security-toolkit/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===USB/External Device ===&lt;br /&gt;
&#039;&#039;We highly recommend visiting an Encryption Fair or your local IT staff for help with encryption.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[http://www.tomsguide.com/faq/id-2318737/encrypt-usb-flash-drive.html Windows Instructions]&lt;br /&gt;
&lt;br /&gt;
[http://www.theinstructional.com/guides/encrypt-an-external-disk-or-usb-stick-with-a-password Mac Instructions]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Mobile Device ===&lt;br /&gt;
Mobile phones need to have AirWatch installed, if eligible.&lt;br /&gt;
&lt;br /&gt;
To check if you are eligible: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;white-space: pre-wrap;  white-space: -moz-pre-wrap;  white-space: -pre-wrap; white-space: -o-pre-wrap;  word-wrap: break-word;&amp;quot;&amp;gt;If the user is non-exempt (e.g. hourly pay status/bi-weekly pay), then they are not eligible for Airwatch unless approved by HR. &lt;br /&gt;
&lt;br /&gt;
If the user is exempt (paid monthly), then they are eligible and should go to the encryption fair starting on Monday for help on downloading and installing Airwatch.  Users also MUST have an AD account. &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A PIN/pattern screen lock alone is not enough (though it is still highly recommended!).&lt;br /&gt;
Please install AirWatch at the [http://employee.uclahealth.org/encryption-fairs/ Encryption Fairs].&lt;br /&gt;
&lt;br /&gt;
== Encryption Fair ==&lt;br /&gt;
For more information and dates: [http://employee.uclahealth.org/encryption-fairs/ Go Here]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=2927</id>
		<title>Hoffman2:Getting an Account</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=2927"/>
		<updated>2015-09-28T16:48:06Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Applying for the Account */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==Requesting a Hoffman2 Account==&lt;br /&gt;
===What You Need===&lt;br /&gt;
A UCLA BOL account, available for free to any UCLA staff, student, or faculty member. If you do not have a BOL account, head to the [https://logon.ucla.edu UCLA Logon] services page. Click on &amp;quot;Create UCLA Logon ID&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Applying for the Account===&lt;br /&gt;
 ATTENTION: If you are a member of another lab or are a PI interested in obtaining Hoffman access, please see the section&lt;br /&gt;
 [[#Becoming A Faculty Sponsor | Becoming a Faculty Sponsor]]&lt;br /&gt;
#Navigate to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;New User Registration&amp;quot;&lt;br /&gt;
#Authenticate using your UCLA BOL credentials&lt;br /&gt;
#Fill out the form with appropriate information. For Hoffman2, your Faculty Sponsor should be Mark Cohen, Alison Burggren (for Susan Bookheimer&#039;s lab), or your respective PI if they are a Faculty Sponsor on Hoffman.&lt;br /&gt;
;Proposed UserName&lt;br /&gt;
:This is the username you will use to sign in to the cluster.&lt;br /&gt;
;Select a Resource&lt;br /&gt;
:For the Mark Cohen/Susan Bookheimer labs, choose &amp;quot;Hoffman2&amp;quot;. However, you can request access to any cluster that is a member of the Grid Portal. &lt;br /&gt;
&lt;br /&gt;
Click Submit.&lt;br /&gt;
You will receive an email with a link to a temporary password. &#039;&#039;&#039;PLEASE WRITE IT DOWN.&#039;&#039;&#039; The link expires after 72 hours. If you missed the link or it expired, go back to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Application Page] and click Forgot Your Cluster Password? It will take about a day for the cluster to resend you a new password. &lt;br /&gt;
&lt;br /&gt;
You can change your password once you&#039;ve logged in by using [[Hoffman2:Accessing_the_Cluster#Change_Passwords | passwd.]]&lt;br /&gt;
&lt;br /&gt;
==Becoming A Faculty Sponsor==&lt;br /&gt;
If you are a PI or Lab Manager interested in the Hoffman2 Cluster, you will want to create a Faculty Sponsor account first. Also, if you are a member of another lab collaborating with the Cohen or Bookheimer labs, you may want to forward this information to your PI or Lab Manager. Faculty Sponsors can approve (or deny) applications for membership to their group. They also receive a group folder and a unique group id so their users can work and share data easily with each other.&lt;br /&gt;
&lt;br /&gt;
#Navigate to the [http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;Request to become faculty sponsor&amp;quot; (On the Bottom)&lt;br /&gt;
#Fill out the form with appropriate information.&lt;br /&gt;
&lt;br /&gt;
Under &#039;Reason&#039;, almost any generic reason is appropriate for faculty members. For example, &amp;quot;To perform fMRI analysis&amp;quot; will likely suffice.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[https://grid.ucla.edu:9443/gridsphere/gridsphere?cid=home&amp;amp;JavaScript=enabled UCLA Grid Portal]&lt;br /&gt;
*[https://logon.ucla.edu UCLA BOL Home Page]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Hoffman2 Account Page]&lt;br /&gt;
*[[Hoffman2:Getting_an_Account-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=2926</id>
		<title>Hoffman2:Getting an Account</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&amp;diff=2926"/>
		<updated>2015-09-28T16:47:42Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Applying for the Account */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==Requesting a Hoffman2 Account==&lt;br /&gt;
===What You Need===&lt;br /&gt;
A UCLA BOL account, available for free to any UCLA staff, student, or faculty member. If you do not have a BOL account, head to the [https://logon.ucla.edu UCLA Logon] services page. Click on &amp;quot;Create UCLA Logon ID&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Applying for the Account===&lt;br /&gt;
 ATTENTION: If you are a member of another lab or are a PI interested in obtaining Hoffman access, please see the section&lt;br /&gt;
 [[#Becoming A Faculty Sponsor | Becoming a Faculty Sponsor]]&lt;br /&gt;
#Navigate to the [http://www.hoffman2.idre.ucla.edu/getting-started/ Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;New User Registration&amp;quot;&lt;br /&gt;
#Authenticate using your UCLA BOL credentials&lt;br /&gt;
#Fill out the form with appropriate information. For Hoffman2, your Faculty Sponsor should be Mark Cohen, Alison Burggren (for Susan Bookheimer&#039;s lab), or your respective PI if they are a Faculty Sponsor on Hoffman.&lt;br /&gt;
;Proposed UserName&lt;br /&gt;
:This is the username you will use to sign in to the cluster.&lt;br /&gt;
;Select a Resource&lt;br /&gt;
:For the Mark Cohen/Susan Bookheimer labs, choose &amp;quot;Hoffman2&amp;quot;. However, you can request access to any cluster that is a member of the Grid Portal. &lt;br /&gt;
&lt;br /&gt;
Click Submit.&lt;br /&gt;
You will receive an email with a link to a temporary password. &#039;&#039;&#039;PLEASE WRITE IT DOWN.&#039;&#039;&#039; The link expires after 72 hours. If you missed the link or it expired, go back to the [http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Application Page] and click Forgot Your Cluster Password? It will take about a day for the cluster to resend you a new password. &lt;br /&gt;
&lt;br /&gt;
You can change your password once you&#039;ve logged in by using [[Hoffman2:Accessing_the_Cluster#Change_Passwords | passwd.]]&lt;br /&gt;
&lt;br /&gt;
==Becoming A Faculty Sponsor==&lt;br /&gt;
If you are a PI or Lab Manager interested in the Hoffman2 Cluster, you will want to create a Faculty Sponsor account first. Also, if you are a member of another lab collaborating with the Cohen or Bookheimer labs, you may want to forward this information to your PI or Lab Manager. Faculty Sponsors can approve (or deny) applications for membership to their group. They also receive a group folder and a unique group id so their users can work and share data easily with each other.&lt;br /&gt;
&lt;br /&gt;
#Navigate to the [http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Account Applications] page.&lt;br /&gt;
#Read over the application summary&lt;br /&gt;
#Click &amp;quot;Request to become faculty sponsor&amp;quot; (On the Bottom)&lt;br /&gt;
#Fill out the form with appropriate information.&lt;br /&gt;
&lt;br /&gt;
Under &#039;Reason&#039;, almost any generic reason is appropriate for faculty members. For example, &amp;quot;To perform fMRI analysis&amp;quot; will likely suffice.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[https://grid.ucla.edu:9443/gridsphere/gridsphere?cid=home&amp;amp;JavaScript=enabled UCLA Grid Portal]&lt;br /&gt;
*[https://logon.ucla.edu UCLA BOL Home Page]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Hoffman2 Account Page]&lt;br /&gt;
*[[Hoffman2:Getting_an_Account-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Submitting_Jobs&amp;diff=2912</id>
		<title>Hoffman2:Submitting Jobs</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Submitting_Jobs&amp;diff=2912"/>
		<updated>2015-06-25T23:22:50Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
If you remember from [[Hoffman2:Introduction#Sun Grid Engine|Anatomy of the Computing Cluster]], the Sun Grid Engine on Hoffman2 is the scheduler for all computing jobs.  It takes your computing job request, considers what resources you are asking for and then puts your job in a line waiting for those resources to become available.&lt;br /&gt;
&lt;br /&gt;
Ask for a simple 1GB of memory and a single computing core with a short time window, and your job will likely get placed at the front of the line and start running soon if not immediately.  And for the vast majority of people, this will be the case.&lt;br /&gt;
&lt;br /&gt;
Ask for a lot of memory or many computing cores, and your job will get put further back in the line because it will have to wait for more things to become available.  If your job needs these types of resources, you are probably at a level where reading this tutorial isn&#039;t very helpful.&lt;br /&gt;
&lt;br /&gt;
Ask for too little RAM or too little time, and your job will be killed or will end prematurely, leaving you with no results to examine.&lt;br /&gt;
&lt;br /&gt;
Choose wisely.&lt;br /&gt;
&lt;br /&gt;
So how does one submit a computing job request?  You&#039;ve got some options:&lt;br /&gt;
# &#039;&#039;&#039;job.q&#039;&#039;&#039;&lt;br /&gt;
#: Use a simple menu-driven tool that ATS wrote.  It walks you through submitting a job but has been known to omit certain necessary flags.&lt;br /&gt;
# &#039;&#039;&#039;qsub&#039;&#039;&#039;&lt;br /&gt;
#: Get under the hood and do it yourself.  It can get messy but it can also be faster and you have more flexibility with options.&lt;br /&gt;
# &#039;&#039;&#039;command files&#039;&#039;&#039;&lt;br /&gt;
#: You&#039;ve graduated to a higher level of operations, but we can help you get there with examples of our own command files.&lt;br /&gt;
# &#039;&#039;&#039;job arrays&#039;&#039;&#039;&lt;br /&gt;
#: If you&#039;ve got a lot of repetitive tasks to run, these will be your friend.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Aggregating Output Files==&lt;br /&gt;
By default, whenever you submit a job, the standard output and error files are created in whichever directory you submitted the job from, unless you tell qsub otherwise with the &amp;quot;-o&amp;quot; and &amp;quot;-e&amp;quot; arguments.  &#039;&#039;&#039;This can be very annoying when trying to keep your file count down, as output files can end up everywhere.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This is how you can avoid running around looking for these files:&lt;br /&gt;
# [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]]&lt;br /&gt;
# Use your favorite [[Text Editors|text editor]] to edit the file &amp;lt;code&amp;gt;~/.sge_request&amp;lt;/code&amp;gt;&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ vim ~/.sge_request&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ emacs ~/.sge_request&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ nedit ~/.sge_request&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Insert this line into the file&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;-o $HOME/job-output-files/&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* Type &amp;lt;code&amp;gt;A&amp;lt;/code&amp;gt; - capital A - to go to the end of the line and enter insert mode&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
#: [[Text Editors#NEdit (H2)|NEdit]]&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
# Save the file&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;ESC + &amp;quot;:wq&amp;quot; + ENTER&amp;lt;/code&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;CTRL+x, CTRL+c&amp;lt;/code&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;CTRL+x, CTRL+c, y&amp;lt;/code&amp;gt;&lt;br /&gt;
#:* or use the menu system&lt;br /&gt;
#: [[Text Editors#NEdit (H2)|NEdit]]&lt;br /&gt;
#:* Use the menu.&lt;br /&gt;
# Now use the following command to create the special directory that will receive all of the output and error files for the jobs you run.&lt;br /&gt;
#: &amp;lt;pre&amp;gt;mkdir ~/job-output-files&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Make an edit to your ~/.bash_profile so that you can run [[Hoffman2:Interactive Sessions]] without a problem&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ vim ~/.bash_profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ emacs ~/.bash_profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ nedit ~/.bash_profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Insert this line at the &#039;&#039;&#039;bottom&#039;&#039;&#039; of the file&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;alias qrsh=&#039;qrsh -o /dev/null&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* Type &amp;lt;code&amp;gt;G&amp;lt;/code&amp;gt; - capital G - to go to the end of the file&lt;br /&gt;
#:* Type &amp;lt;code&amp;gt;A&amp;lt;/code&amp;gt; - capital A - to go to the end of the line and enter insert mode&lt;br /&gt;
#:* Type &amp;lt;code&amp;gt;ENTER&amp;lt;/code&amp;gt; - to insert a newline&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]&lt;br /&gt;
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
#: [[Text Editors#NEdit (H2)|NEdit]]&lt;br /&gt;
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
# Save the file&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;ESC + &amp;quot;:wq&amp;quot; + ENTER&amp;lt;/code&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;CTRL+x, CTRL+c&amp;lt;/code&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;CTRL+x, CTRL+c, y&amp;lt;/code&amp;gt;&lt;br /&gt;
#:* or use the menu system&lt;br /&gt;
#: [[Text Editors#NEdit (H2)|NEdit]]&lt;br /&gt;
#:* Use the menu.&lt;br /&gt;
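&lt;br /&gt;
When you are done, your two configuration files should contain lines like the following (a sketch assuming the directory name used above; adjust the path if you chose a different one):&lt;br /&gt;
 # ~/.sge_request -- default qsub options, one per line&lt;br /&gt;
 -o $HOME/job-output-files/&lt;br /&gt;
 &lt;br /&gt;
 # ~/.bash_profile -- keep qrsh working with the new default&lt;br /&gt;
 alias qrsh=&#039;qrsh -o /dev/null&#039;&lt;br /&gt;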
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==job.q==&lt;br /&gt;
Once you&#039;ve identified or written a script you&#039;d like to run, [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]] and enter &amp;lt;code&amp;gt;job.q&amp;lt;/code&amp;gt;.  Then it is just a matter of following its step-by-step instructions.&lt;br /&gt;
&lt;br /&gt;
From the tool&#039;s main menu, you can type &#039;&#039;Info&#039;&#039; to read up on how to use it, and we highly encourage you to do so.&lt;br /&gt;
&lt;br /&gt;
But we know patience is a virtue that most of us aren&#039;t blessed with.  So we&#039;ll walk you through submitting a basic job so you can hit the ground running.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
# Once on Hoffman2, you&#039;ll need to edit one file so pull out your favorite [[Text Editors|text editor]] and edit the file&lt;br /&gt;
#: &amp;lt;pre&amp;gt;~/.queuerc&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Add the line&lt;br /&gt;
#: &amp;lt;pre&amp;gt;set qqodir = ~/job-output&amp;lt;/pre&amp;gt;&lt;br /&gt;
# You&#039;ve just set the default directory where your job command files will be created. Save the configuration file and close your text editor.&lt;br /&gt;
# Make that directory using the command&lt;br /&gt;
#: &amp;lt;pre&amp;gt;$ mkdir ~/job-output&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Now execute&lt;br /&gt;
#:&amp;lt;pre&amp;gt;$ job.q&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Press enter to acknowledge the message about some files that get created (READ IT FIRST THOUGH).&lt;br /&gt;
# Type &#039;&#039;Build &amp;lt;ENTER&amp;gt;&#039;&#039; to begin creating an SGE command file.&lt;br /&gt;
# The program now asks you which script you&#039;d like to run, enter the following text to use our example script&lt;br /&gt;
#: &amp;lt;pre&amp;gt;/u/home/FMRI/apps/examples/qsub/gather.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
# The program now asks how much memory the job will need (in [http://en.wikipedia.org/wiki/Megabyte Megabytes]).  This script is really simple, so let&#039;s go with the minimum and enter &#039;&#039;64&#039;&#039;.&lt;br /&gt;
# The program now asks how long the job will take (in hours). Go with the minimum of 1 hour; it will complete in much less time than that.&lt;br /&gt;
# The program now asks if your job should be limited to only your resource group&#039;s cores. Answer &#039;&#039;n&#039;&#039; because you do not need to be limiting yourself here and the job is not going to be running for more than 24 hours.&lt;br /&gt;
# Soon, the program will tell you that &#039;&#039;gather.sh.cmd&#039;&#039; has been built and saved.&lt;br /&gt;
# When it asks you if you would like to submit your job, say no.  Then type &#039;&#039;Quit &amp;lt;ENTER&amp;gt;&#039;&#039; to leave the program.&lt;br /&gt;
# Now you should be able to run&lt;br /&gt;
#: &amp;lt;pre&amp;gt;ls ~/job-output&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: and see &#039;&#039;gather.sh.cmd&#039;&#039;.  This file will stay there until you delete it and can be run over and over again.  Making a command file like this is especially useful if there is a task you&#039;ll be running repeatedly on Hoffman2.  But if this is something you only need to run once, you should delete the file so you don&#039;t needlessly approach your [[Hoffman2:Quotas|quota]].&lt;br /&gt;
# The time has come to actually run the program (thought we&#039;d never get to that, didn&#039;t you?). Type&lt;br /&gt;
#: &amp;lt;pre&amp;gt;qsub job-output/gather.sh.cmd&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: and after hitting enter, a message similar to this will pop up:&lt;br /&gt;
#: &amp;lt;pre&amp;gt;Your job 1882940 (&amp;quot;gather.sh.cmd&amp;quot;) has been submitted&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: where the number is your JobID, a unique numerical identifier for the computer job you have submitted to the queue.&lt;br /&gt;
# Now you can check if the job has finished running by doing&lt;br /&gt;
#: &amp;lt;pre&amp;gt;ls ~/job-output&amp;lt;/pre&amp;gt;&lt;br /&gt;
# When two files named &#039;&#039;gather.sh.output.[JOBID]&#039;&#039; and &#039;&#039;gather.sh.joblog.[JOBID]&#039;&#039; (where JOBID is your job&#039;s unique identifier) appear, your job has run.&lt;br /&gt;
#: &#039;&#039;gather.sh.output.[JOBID]&#039;&#039;&lt;br /&gt;
#:: This file has all the standard output generated by your script.  In this case it will just have the line&lt;br /&gt;
#::: &#039;&#039;Standard output would appear here.&#039;&#039;&lt;br /&gt;
#: &#039;&#039;gather.sh.joblog.[JOBID]&#039;&#039;&lt;br /&gt;
#:: This file has all the details about when, where, and how your job was processed. Useful information if you are going to be running this job over and over and need to fine tune the resources it uses.&lt;br /&gt;
# Better ways of checking on your job can be found [[Hoffman2:Monitoring Jobs|here]].&lt;br /&gt;
# The script you ran is an aggregator.  It looks in a list of directories, each assumed to contain a specifically named file, and gathers the contents of each of those files into one central file in your home directory.  This file is named &#039;&#039;gather-[TIMESTAMP].txt&#039;&#039; where TIMESTAMP is when the script was run and follows [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601] style encoding. You are encouraged to type&lt;br /&gt;
#: &amp;lt;pre&amp;gt;/u/home/FMRI/apps/examples/qsub/gather.sh -h&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: or&lt;br /&gt;
#: &amp;lt;pre&amp;gt;/u/home/FMRI/apps/examples/qsub/gather.sh --help&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: to see how this script works.&lt;br /&gt;
# Finally, go check the inbox of the email address you used to sign up for your Hoffman2 account.  There will be two emails from &amp;quot;root@mail.hoffman2.idre.ucla.edu&amp;quot; indicating when the job started and when it completed.  This is one of the neat features of the queue: you can be alerted about the progress of your job without having to stay logged into Hoffman2 to check on it constantly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==qsub==&lt;br /&gt;
Everything that job.q did can be done on the command line.  And it can be done better.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Run the command:&lt;br /&gt;
 $ qsub -cwd -V -N J1 -l h_data=64M,express,h_rt=00:05:00 -M eplau -m bea /u/home/FMRI/apps/examples/qsub/gather.sh&lt;br /&gt;
&lt;br /&gt;
And something like the following will be printed out:&lt;br /&gt;
 Your job 1875395 (&amp;quot;J1&amp;quot;) has been submitted&lt;br /&gt;
&lt;br /&gt;
Where the number is your JOBID, a unique numerical identifier for your job.&lt;br /&gt;
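If you ever script your submissions, the JOBID can be parsed out of that message.  A minimal sketch, assuming SGE&#039;s standard submission message format (the &amp;lt;code&amp;gt;echo&amp;lt;/code&amp;gt; stands in for a real &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; call):&lt;br /&gt;

```shell
# Hypothetical sketch: grab the JobID from the submission message.
# The echo below stands in for an actual qsub invocation on Hoffman2.
msg='Your job 1875395 ("J1") has been submitted'
JOBID=$(echo "$msg" | awk '{print $3}')   # JobID is the third word
echo "JOBID=$JOBID"
```

A captured JOBID is handy for later &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;qdel&amp;lt;/code&amp;gt; calls.&lt;br /&gt;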
&lt;br /&gt;
Let&#039;s break down the arguments in that command.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-cwd&amp;lt;/code&amp;gt;&lt;br /&gt;
: Change working directory&lt;br /&gt;
: When your script runs, change the working directory to where you currently are in the filesystem.&lt;br /&gt;
:: e.g. If you were in the directory /u/home/mscohen/data/ when you ran the command, the queue will change directories to that location and then execute the script you gave it.  This means the output and error files for that job will be placed there.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-V&amp;lt;/code&amp;gt;&lt;br /&gt;
: Export environment variables&lt;br /&gt;
: Exports all the environment variables to the context of the job.  Useful if you have extra environment variables that are needed in your script.&lt;br /&gt;
:: e.g. If you had defined the variable SUBJECT_ID in your session on Hoffman2 (&amp;lt;code&amp;gt;export SUBJECT_ID=42&amp;lt;/code&amp;gt;) before submitting a job and that variable was called on by your script, then you would need to use this flag.  Tools like FreeSurfer look for certain environment variables to be set.&lt;br /&gt;
 &lt;br /&gt;
;&amp;lt;code&amp;gt;-N J1&amp;lt;/code&amp;gt;&lt;br /&gt;
: Name my job&lt;br /&gt;
: Names your job &amp;quot;J1.&amp;quot;  When you [[Hoffman2:Monitoring Jobs#qstat|look at the queue]], this will be the text that shows up in the &amp;quot;name&amp;quot; column.  This will also be the beginning of the output (&amp;lt;code&amp;gt;J1.o[JOBID]&amp;lt;/code&amp;gt;) and error (&amp;lt;code&amp;gt;J1.e[JOBID]&amp;lt;/code&amp;gt;) files for your job.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-l h_data=64M,express,h_rt=00:05:00&amp;lt;/code&amp;gt;&lt;br /&gt;
: Resource allocation (that&#039;s a lower case &amp;quot;L&amp;quot;)&lt;br /&gt;
: This is the resources flag meaning that the text immediately after it will ask for things like:&lt;br /&gt;
:* certain amount of memory, in [http://en.wikipedia.org/wiki/Megabyte Megabytes], or [http://en.wikipedia.org/wiki/Gigabyte Gigabytes]&lt;br /&gt;
:** h_data=64M (64 MB RAM) or h_data=1G (1 GB RAM)&lt;br /&gt;
:** &amp;quot;mem&amp;quot; no longer works &lt;br /&gt;
: In this case, our demands for RAM are really low, so we are requesting only 64MB.&lt;br /&gt;
: &#039;&#039;&#039;Edit (2013.09)&#039;&#039;&#039; - If your job uses more RAM than it requested, your job WILL be killed in order to avoid it hurting other jobs running on the same node. It is imperative that you set this RAM request properly.&lt;br /&gt;
:* certain length of computing time, in the form HH:MM:SS&lt;br /&gt;
:** h_rt=00:05:00    or&lt;br /&gt;
:** time=00:05:00&lt;br /&gt;
: In this case the script will complete its task rapidly, hence we are only asking for 5 minutes of computing time.&lt;br /&gt;
:* queue type, only a few options here&lt;br /&gt;
:** [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#express express]&lt;br /&gt;
:**: Time limit of 2 hours, and it tends to be overloaded so it isn&#039;t recommended&lt;br /&gt;
:** [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#highp highp]&lt;br /&gt;
:**: Job length maximum of 14 days but can only be run on nodes belonging to your resource group (type &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what type of resources you have available). If you are in the mscohen or sbook usergroups on Hoffman2, you have access to some of these highp nodes.&lt;br /&gt;
:** [blank] (nothing, nada, zilch)&lt;br /&gt;
:**: [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#day Standard queue], which has a maximum job length of 24 hours&lt;br /&gt;
: In this case, we are asking to be put on the express queue since this is such a short job, but the standard queue would have worked just as well if not better.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-M eplau&amp;lt;/code&amp;gt;&lt;br /&gt;
: Define mailing list&lt;br /&gt;
: This defines the list of users that will be mailed if email updates are requested.  The default address is that of the job-owner, but multiple emails can be specified using a comma separated list.&lt;br /&gt;
:: e.g. In this case, the email will be sent to the address on file for the user &amp;quot;eplau&amp;quot;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-m bea&amp;lt;/code&amp;gt;&lt;br /&gt;
: Define mailing rules&lt;br /&gt;
: This defines when Hoffman2 should email you about your job.  There are five options here&lt;br /&gt;
:* b - when the job begins&lt;br /&gt;
:* e - when the job ends&lt;br /&gt;
:* a - when the job is aborted&lt;br /&gt;
:* s - when the job is suspended&lt;br /&gt;
:* n - never&lt;br /&gt;
: The first four can be used in any combination, but the last obviously nullifies the others.&lt;br /&gt;
&lt;br /&gt;
There are many other flags that you could use, but these are the basics that will get you through most of your computing.  Feel free to explore the others in the [http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Command Files==&lt;br /&gt;
Typing accurately can be difficult at times, so why put yourself through the trouble of having to retype the same arguments over and over if you will always be using about the same values?  Enter command files.&lt;br /&gt;
&lt;br /&gt;
You already have experience making a command file (~/job-output/gather.sh.cmd) from when you used the tool &amp;lt;code&amp;gt;job.q&amp;lt;/code&amp;gt;.  But did you know that you can edit that command file to make changes to how it runs, or write your own?&lt;br /&gt;
&lt;br /&gt;
The command files generated by &amp;lt;code&amp;gt;job.q&amp;lt;/code&amp;gt; are fairly well commented, so if you take a look at them with your favorite [[Text Editors|text editor]] you should be able to change their behavior.  For instance, go into the command file from the job.q example and find the lines that say&lt;br /&gt;
 #  Notify at beginning and end of job&lt;br /&gt;
 #$ -m bea&lt;br /&gt;
You&#039;ll recognize this as the flag that controls when email messages are sent.  Go ahead and change it to&lt;br /&gt;
 # Notify at the end and on abort&lt;br /&gt;
 #$ -m ae&lt;br /&gt;
And you will receive email only when your job finishes (or if it is aborted).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===q.sh===&lt;br /&gt;
You could make a generic command file that contains all the basic flags that you care about.  We&#039;ve even got an example ready and available for you at&lt;br /&gt;
 /u/home/FMRI/apps/examples/qsub/q.sh&lt;br /&gt;
The script contents are shown below:&lt;br /&gt;
 qsub &amp;lt;&amp;lt;CMD&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # Use current working directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # Error stream is merged with the standard output&lt;br /&gt;
 #$ -j y&lt;br /&gt;
 # Use the bash shell for job execution&lt;br /&gt;
 #$ -S /bin/bash&lt;br /&gt;
 # Use your normal environment variables in the job&lt;br /&gt;
 #$ -V&lt;br /&gt;
 # Use 1GB of RAM and the main queue, with a maximum of 2 hours computing time&lt;br /&gt;
 #$ -l h_data=1024M,h_rt=2:00:00&lt;br /&gt;
 $@&lt;br /&gt;
 CMD&lt;br /&gt;
To use this command file to submit the &#039;&#039;gather.sh&#039;&#039; example script, you would execute the command:&lt;br /&gt;
 $ q.sh gather.sh&lt;br /&gt;
You can do this because, if you have [[Hoffman2:Software Tools#Setting Up Your Account to Access the Tools|set up your Bash profile correctly]], the example scripts are in your [[Hoffman2:UNIX Tutorial#PATH|Unix PATH variable]].  You can replace &#039;&#039;gather.sh&#039;&#039; with any script you want executed and it will be submitted as a job on the cluster.  We recommend that you make your own copy of &#039;&#039;q.sh&#039;&#039; and keep it in your local &#039;&#039;bin&#039;&#039; directory (~/bin) so that you can edit it to suit your needs.&lt;br /&gt;
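The recommended setup might look like the following sketch (the source path is the Hoffman2-specific example directory; adjust to taste):&lt;br /&gt;

```shell
# Sketch: keep a personal, editable copy of q.sh in ~/bin.
mkdir -p ~/bin
cp /u/home/FMRI/apps/examples/qsub/q.sh ~/bin/
chmod u+x ~/bin/q.sh
# ~/bin must be on your PATH; if it is not, add a line like this
# to your ~/.bash_profile:
#   export PATH="$HOME/bin:$PATH"
```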
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Job Arrays==&lt;br /&gt;
There is an SGE qsub argument that allows you to submit multiple jobs in parallel that use the same script.  It is&lt;br /&gt;
 -t lower-upper:interval&lt;br /&gt;
where&lt;br /&gt;
;&amp;lt;code&amp;gt;lower&amp;lt;/code&amp;gt;&lt;br /&gt;
: is replaced with the starting number&lt;br /&gt;
;&amp;lt;code&amp;gt;upper&amp;lt;/code&amp;gt;&lt;br /&gt;
: is replaced with the ending number&lt;br /&gt;
;&amp;lt;code&amp;gt;interval&amp;lt;/code&amp;gt;&lt;br /&gt;
: is replaced with the step interval&lt;br /&gt;
So adding the argument&lt;br /&gt;
 -t 10-100:5&lt;br /&gt;
will step through the numbers 10, 15, 20, 25, ..., 100 submitting a job for each one.&lt;br /&gt;
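You can preview exactly which task IDs a given &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; range will generate with &amp;lt;code&amp;gt;seq&amp;lt;/code&amp;gt; (a quick sanity check on the command line, independent of SGE itself):&lt;br /&gt;

```shell
# Preview the task IDs that "-t 10-100:5" would generate:
# seq FIRST INCREMENT LAST prints one number per line.
seq 10 5 100
```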
&lt;br /&gt;
In jobs that are called with this flag, there will be an [[Hoffman2:UNIX Tutorial#Environment Variables|environment variable]] called &amp;lt;code&amp;gt;SGE_TASK_ID&amp;lt;/code&amp;gt; whose value will be incremented over the range you specified.  Each possible value of SGE_TASK_ID will be submitted as its own job, so your work will be parallelized.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Examples===&lt;br /&gt;
Why would anyone use this?  Here are some examples&lt;br /&gt;
&lt;br /&gt;
====Lots of numbers====&lt;br /&gt;
Let&#039;s say you have a script, &#039;&#039;&#039;myFunc.sh&#039;&#039;&#039;, that takes one numerical input and computes a bunch of values based on that input.  But you need to run &amp;lt;code&amp;gt;myFunc.sh&amp;lt;/code&amp;gt; for input values 1 to 100.  One solution would be to write a wrapper script &#039;&#039;&#039;myFuncSlowWrapper.sh&#039;&#039;&#039; as&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # myFuncSlowWrapper.sh&lt;br /&gt;
 for i in {1..100};&lt;br /&gt;
 do&lt;br /&gt;
     myFunc.sh $i;&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
The only drawback is that this will take quite a while since all 100 iterations will be done on a single processor.  With job arrays, the computations will be split among many processors and can finish much more quickly.  You would instead write a wrapper script called &#039;&#039;&#039;myFuncFastWrapper.sh&#039;&#039;&#039; as&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # myFuncFastWrapper.sh&lt;br /&gt;
 echo $SGE_TASK_ID&lt;br /&gt;
 myFunc.sh $SGE_TASK_ID&lt;br /&gt;
&lt;br /&gt;
And submit it with&lt;br /&gt;
 qsub -cwd -V -N PJ -l h_data=1024M,h_rt=01:00:00 -M eplau -m bea -t 1-100:1 myFuncFastWrapper.sh&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
====Lots of files====&lt;br /&gt;
Let&#039;s say you have a script, &#039;&#039;&#039;myFunc2.sh&#039;&#039;&#039;, that takes the name of a file as input and opens that file and runs a bunch of computations on its contents.  But you have 100 such files to process.  One solution would be to write a wrapper script &#039;&#039;&#039;myFunc2SlowWrapper.sh&#039;&#039;&#039; as&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # myFunc2SlowWrapper.sh&lt;br /&gt;
 for FILE in `ls dir/of/files`;&lt;br /&gt;
 do&lt;br /&gt;
     myFunc2.sh $FILE&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
But this will take quite a while since all 100 iterations will be done on a single processor. With job arrays, the computations will be split among many processors since they are submitted as their own jobs and can finish much more quickly.  You could instead create a file that contains a list of all 100 files that need to be processed and call it &#039;&#039;&#039;filesToProcess&#039;&#039;&#039;. Then write a wrapper script called &#039;&#039;&#039;myFunc2FastWrapper.sh&#039;&#039;&#039; as&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # myFunc2FastWrapper.sh&lt;br /&gt;
 echo $SGE_TASK_ID&lt;br /&gt;
 myFunc2.sh `sed -n ${SGE_TASK_ID}p /path/to/list/of/files`&lt;br /&gt;
&lt;br /&gt;
where you replace &#039;&#039;/path/to/list/of/files&#039;&#039; with the path to &#039;&#039;&#039;filesToProcess&#039;&#039;&#039;.  The code&lt;br /&gt;
 `sed -n ${SGE_TASK_ID}p /path/to/list/of/files`&lt;br /&gt;
uses &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt; to grab the ${SGE_TASK_ID}th line from the file &#039;&#039;&#039;/path/to/list/of/files&#039;&#039;&#039; and substitute it into the command (thanks to the backticks, which tell the shell to run the enclosed command and use its output).&lt;br /&gt;
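Here is a self-contained demonstration of that &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt; trick; the file name and its contents are made up for the demo:&lt;br /&gt;

```shell
# Build a throwaway list of three "files" to process.
printf 'scan-a.nii\nscan-b.nii\nscan-c.nii\n' > /tmp/demo-file-list
# Pretend SGE assigned us task 2; sed -n "2p" prints only line 2.
SGE_TASK_ID=2
sed -n "${SGE_TASK_ID}p" /tmp/demo-file-list   # prints scan-b.nii
```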
&lt;br /&gt;
Then you&#039;d submit it with&lt;br /&gt;
 qsub -cwd -V -N PJ -l h_data=1024M,express,h_rt=01:00:00 -M eplau -m bea -t 1-100:1 myFunc2FastWrapper.sh&lt;br /&gt;
&lt;br /&gt;
If your files were named regularly with a &#039;-number&#039; at the end (e.g. &#039;file-1&#039;, &#039;file-2&#039;, &#039;file-3&#039;, ... &#039;file-n&#039;), you could just make &#039;&#039;&#039;myFunc2FastWrapperB.sh&#039;&#039;&#039; as&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # myFunc2FastWrapperB.sh&lt;br /&gt;
 echo $SGE_TASK_ID&lt;br /&gt;
 myFunc2.sh file-${SGE_TASK_ID}&lt;br /&gt;
and submit it the same way.&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page]&lt;br /&gt;
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm Hoffman2 Types of Queues]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Submitting_Jobs&amp;diff=2911</id>
		<title>Hoffman2:Submitting Jobs</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Submitting_Jobs&amp;diff=2911"/>
		<updated>2015-06-25T23:21:00Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
If you remember from [[Hoffman2:Introduction#Sun Grid Engine|Anatomy of the Computing Cluster]], the Sun Grid Engine on Hoffman2 is the scheduler for all computing jobs.  It takes your computing job request, considers what resources you are asking for and then puts your job in a line waiting for those resources to become available.&lt;br /&gt;
&lt;br /&gt;
Ask for a simple 1GB of memory and a single computing core with a short time window, and your job will likely get placed at the front of the line and start running soon if not immediately.  And for the vast majority of people, this will be the case.&lt;br /&gt;
&lt;br /&gt;
Ask for a lot of memory or many computing cores, and your job will get put further back in the line because it will have to wait for more things to become available.  If your job needs these types of resources, you are probably at a level where reading this tutorial isn&#039;t very helpful.&lt;br /&gt;
&lt;br /&gt;
Ask for too little RAM or too little time and your job will be killed or end prematurely leaving you with no results to examine.&lt;br /&gt;
&lt;br /&gt;
Choose wisely.&lt;br /&gt;
&lt;br /&gt;
So how does one submit a computing job request?  You&#039;ve got some options:&lt;br /&gt;
# &#039;&#039;&#039;job.q&#039;&#039;&#039;&lt;br /&gt;
#: Use a simple tool that ATS wrote.  It has a menu and walks you through submitting things but has been known to possibly forget certain necessary flags.&lt;br /&gt;
# &#039;&#039;&#039;qsub&#039;&#039;&#039;&lt;br /&gt;
#: Get under the hood and do it yourself.  It can get messy but it can also be faster and you have more flexibility with options.&lt;br /&gt;
# &#039;&#039;&#039;command files&#039;&#039;&#039;&lt;br /&gt;
#: You&#039;ve graduated to a higher level of operations, but we can help you get there with examples of our own command files.&lt;br /&gt;
# &#039;&#039;&#039;job arrays&#039;&#039;&#039;&lt;br /&gt;
#: You&#039;ve got a lot of repetitive tasks to run, these will be your friend.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Aggregating Output Files==&lt;br /&gt;
By default, whenever you submit a job, the standard output and error files get created in whichever directory you submitted the job from, unless you tell qsub otherwise with the &amp;quot;-o&amp;quot; and &amp;quot;-e&amp;quot; arguments.  &#039;&#039;&#039;This can be very annoying when you are trying to keep your file count down, as output files end up scattered everywhere.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This is how you can avoid running around looking for these files:&lt;br /&gt;
# [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]]&lt;br /&gt;
# Use your favorite [[Text Editors|text editor]] to edit the file &amp;lt;code&amp;gt;~/.sge_request&amp;lt;/code&amp;gt;&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ vim ~/.sge_request&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ emacs ~/.sge_request&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ nedit ~/.sge_request&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Insert this line into the file&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;-o $HOME/job-output-files/&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* Type &amp;lt;code&amp;gt;A&amp;lt;/code&amp;gt; - capital A - to go to the end of the line and enter insert mode&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
#: [[Text Editors#NEdit (H2)|NEdit]]&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
# Save the file&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;ESC + &amp;quot;:wq&amp;quot; + ENTER&amp;lt;/code&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;CTRL+x, CTRL+c&amp;lt;/code&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;CTRL+x, CTRL+c, y&amp;lt;/code&amp;gt;&lt;br /&gt;
#:* or use the menu system&lt;br /&gt;
#: [[Text Editors#NEdit (H2)|NEdit]]&lt;br /&gt;
#:* Use the menu.&lt;br /&gt;
# Now use the following command to create the special directory that will receive all of the output and error files for the jobs you run.&lt;br /&gt;
#: &amp;lt;pre&amp;gt;mkdir ~/job-output-files&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Make an edit to your ~/.bash_profile so that you can run [[Hoffman2:Interactive Sessions]] without a problem&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ vim ~/.bash_profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ emacs ~/.bash_profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;$ nedit ~/.bash_profile&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Insert this line at the &#039;&#039;&#039;bottom&#039;&#039;&#039; of the file&lt;br /&gt;
#:* &amp;lt;pre&amp;gt;alias qrsh=&#039;qrsh -o /dev/null&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* Type &amp;lt;code&amp;gt;G&amp;lt;/code&amp;gt; - capital G - to go to the end of the file&lt;br /&gt;
#:* Type &amp;lt;code&amp;gt;A&amp;lt;/code&amp;gt; - capital A - to go to the end of the line and enter insert mode&lt;br /&gt;
#:* Type &amp;lt;code&amp;gt;ENTER&amp;lt;/code&amp;gt; - to insert a newline&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]&lt;br /&gt;
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
#: [[Text Editors#NEdit (H2)|NEdit]]&lt;br /&gt;
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.&lt;br /&gt;
#:* Type or paste in the specified lines.&lt;br /&gt;
# Save the file&lt;br /&gt;
#: [[Text Editors#Vim (H2) (OSX)|VIM]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;ESC + &amp;quot;:wq&amp;quot; + ENTER&amp;lt;/code&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;CTRL+x, CTRL+c&amp;lt;/code&amp;gt;&lt;br /&gt;
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]&lt;br /&gt;
#:* &amp;lt;code&amp;gt;CTRL+x, CTRL+c, y&amp;lt;/code&amp;gt;&lt;br /&gt;
#:* or use the menu system&lt;br /&gt;
#: [[Text Editors#NEdit (H2)|NEdit]]&lt;br /&gt;
#:* Use the menu.&lt;br /&gt;
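The editor-by-editor steps above can also be done non-interactively; a sketch, to be run once on Hoffman2 (the single quotes around the &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt; line are deliberate so that $HOME is written literally, for SGE to expand):&lt;br /&gt;

```shell
# Create the collection directory for output/error files.
mkdir -p ~/job-output-files
# Append the default -o rule to ~/.sge_request ($HOME kept literal).
echo '-o $HOME/job-output-files/' >> ~/.sge_request
# Keep interactive sessions working by discarding their -o output.
echo "alias qrsh='qrsh -o /dev/null'" >> ~/.bash_profile
```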
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==job.q==&lt;br /&gt;
Once you&#039;ve identified or written a script you&#039;d like to run, [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]] and enter &amp;lt;code&amp;gt;job.q&amp;lt;/code&amp;gt;.  Then it is just a matter of following its step-by-step instructions.&lt;br /&gt;
&lt;br /&gt;
From the tool&#039;s main menu, you can type &#039;&#039;Info&#039;&#039; to read up about how to use it and we highly encourage you to do so.&lt;br /&gt;
&lt;br /&gt;
But we know patience is a virtue that most of us aren&#039;t blessed with.  So we&#039;ll walk you through submitting a basic job so you can hit the ground running.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
# Once on Hoffman2, you&#039;ll need to edit one file so pull out your favorite [[Text Editors|text editor]] and edit the file&lt;br /&gt;
#: &amp;lt;pre&amp;gt;~/.queuerc&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Add the line&lt;br /&gt;
#: &amp;lt;pre&amp;gt;set qqodir = ~/job-output&amp;lt;/pre&amp;gt;&lt;br /&gt;
# You&#039;ve just set the default directory where your job command files will be created. Save the configuration file and close your text editor.&lt;br /&gt;
# Make that directory using the command&lt;br /&gt;
#: &amp;lt;pre&amp;gt;$ mkdir ~/job-output&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Now execute&lt;br /&gt;
#:&amp;lt;pre&amp;gt;$ job.q&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Press enter to acknowledge the message about some files that get created (READ IT FIRST THOUGH).&lt;br /&gt;
# Type &#039;&#039;Build &amp;lt;ENTER&amp;gt;&#039;&#039; to begin creating an SGE command file.&lt;br /&gt;
# The program now asks you which script you&#039;d like to run, enter the following text to use our example script&lt;br /&gt;
#: &amp;lt;pre&amp;gt;/u/home/FMRI/apps/examples/qsub/gather.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
# The program now asks how much memory the job will need (in [http://en.wikipedia.org/wiki/Megabyte Megabytes]).  This script is really simple, so let&#039;s go with the minimum and enter &#039;&#039;64&#039;&#039;.&lt;br /&gt;
# The program now asks how long the job will take (in hours). Go with the minimum of 1 hour; it will complete in much less time than that.&lt;br /&gt;
# The program now asks if your job should be limited to only your resource group&#039;s cores. Answer &#039;&#039;n&#039;&#039; because you do not need to be limiting yourself here and the job is not going to be running for more than 24 hours.&lt;br /&gt;
# Soon, the program will tell you that &#039;&#039;gather.sh.cmd&#039;&#039; has been built and saved.&lt;br /&gt;
# When it asks you if you would like to submit your job, say no.  Then type &#039;&#039;Quit &amp;lt;ENTER&amp;gt;&#039;&#039; to leave the program.&lt;br /&gt;
# Now you should be able to run&lt;br /&gt;
#: &amp;lt;pre&amp;gt;ls ~/job-output&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: and see &#039;&#039;gather.sh.cmd&#039;&#039;.  This file will stay there until you delete it and can be run over and over again.  Making a command file like this is especially useful if there is a task you&#039;ll be running repeatedly on Hoffman2.  But if this is something you only need to run once, you should delete the file so you don&#039;t needlessly approach your [[Hoffman2:Quotas|quota]].&lt;br /&gt;
# The time has come to actually run the program (thought we&#039;d never get to that, didn&#039;t you?). Type&lt;br /&gt;
#: &amp;lt;pre&amp;gt;qsub job-output/gather.sh.cmd&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: and after hitting enter, a message similar to this will pop up:&lt;br /&gt;
#: &amp;lt;pre&amp;gt;Your job 1882940 (&amp;quot;gather.sh.cmd&amp;quot;) has been submitted&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: where the number is your JobID, a unique numerical identifier for the computer job you have submitted to the queue.&lt;br /&gt;
# Now you can check if the job has finished running by doing&lt;br /&gt;
#: &amp;lt;pre&amp;gt;ls ~/job-output&amp;lt;/pre&amp;gt;&lt;br /&gt;
# When two files named &#039;&#039;gather.sh.output.[JOBID]&#039;&#039; and &#039;&#039;gather.sh.joblog.[JOBID]&#039;&#039; (where JOBID is your job&#039;s unique identifier) appear, your job has run.&lt;br /&gt;
#: &#039;&#039;gather.sh.output.[JOBID]&#039;&#039;&lt;br /&gt;
#:: This file has all the standard output generated by your script.  In this case it will just have the line&lt;br /&gt;
#::: &#039;&#039;Standard output would appear here.&#039;&#039;&lt;br /&gt;
#: &#039;&#039;gather.sh.joblog.[JOBID]&#039;&#039;&lt;br /&gt;
#:: This file has all the details about when, where, and how your job was processed. Useful information if you are going to be running this job over and over and need to fine-tune the resources it uses.&lt;br /&gt;
# Better ways of checking on your job can be found [[Hoffman2:Monitoring Jobs|here]].&lt;br /&gt;
# The script you ran is an aggregator.  It looks in a list of directories, each assumed to contain a specifically named file, and gathers the contents of each of those files into one central file in your home directory.  This file is named &#039;&#039;gather-[TIMESTAMP].txt&#039;&#039; where TIMESTAMP is when the script was run and follows [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601] style encoding. You are encouraged to type&lt;br /&gt;
#: &amp;lt;pre&amp;gt;/u/home/FMRI/apps/examples/qsub/gather.sh -h&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: or&lt;br /&gt;
#: &amp;lt;pre&amp;gt;/u/home/FMRI/apps/examples/qsub/gather.sh --help&amp;lt;/pre&amp;gt;&lt;br /&gt;
#: to see how this script works.&lt;br /&gt;
# Finally, go check the inbox of the email address you used to sign up for your Hoffman2 account.  There will be two emails from &amp;quot;root@mail.hoffman2.idre.ucla.edu&amp;quot; indicating when the job started and when it completed.  This is one of the neat features of the queue: you can be alerted about the progress of your job without having to stay logged into Hoffman2 and checking on it constantly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==qsub==&lt;br /&gt;
Everything that job.q did can be done on the command line.  And it can be done better.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Run the command:&lt;br /&gt;
 $ qsub -cwd -V -N J1 -l h_data=64M,express,h_rt=00:05:00 -M eplau -m bea /u/home/FMRI/apps/examples/qsub/gather.sh&lt;br /&gt;
&lt;br /&gt;
And something like the following will be printed out:&lt;br /&gt;
 Your job 1875395 (&amp;quot;J1&amp;quot;) has been submitted&lt;br /&gt;
&lt;br /&gt;
Where the number is your JOBID, a unique numerical identifier for your job.&lt;br /&gt;
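If you ever script your submissions, the JOBID can be parsed out of that message.  A minimal sketch, assuming SGE&#039;s standard submission message format (the &amp;lt;code&amp;gt;echo&amp;lt;/code&amp;gt; stands in for a real &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; call):&lt;br /&gt;

```shell
# Hypothetical sketch: grab the JobID from the submission message.
# The echo below stands in for an actual qsub invocation on Hoffman2.
msg='Your job 1875395 ("J1") has been submitted'
JOBID=$(echo "$msg" | awk '{print $3}')   # JobID is the third word
echo "JOBID=$JOBID"
```

A captured JOBID is handy for later &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;qdel&amp;lt;/code&amp;gt; calls.&lt;br /&gt;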
&lt;br /&gt;
Let&#039;s break down the arguments in that command.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-cwd&amp;lt;/code&amp;gt;&lt;br /&gt;
: Change working directory&lt;br /&gt;
: When your script runs, change the working directory to where you currently are in the filesystem.&lt;br /&gt;
:: e.g. If you were in the directory /u/home/mscohen/data/ when you ran the command, the queue will change directories to that location and then execute the script you gave it.  This means the output and error files for that job will be placed there.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-V&amp;lt;/code&amp;gt;&lt;br /&gt;
: Export environment variables&lt;br /&gt;
: Exports all the environment variables to the context of the job.  Useful if you have extra environment variables that are needed in your script.&lt;br /&gt;
:: e.g. If you had defined the variable SUBJECT_ID in your session on Hoffman2 (&amp;lt;code&amp;gt;export SUBJECT_ID=42&amp;lt;/code&amp;gt;) before submitting a job and that variable was called on by your script, then you would need to use this flag.  Tools like FreeSurfer look for certain environment variables to be set.&lt;br /&gt;
 &lt;br /&gt;
;&amp;lt;code&amp;gt;-N J1&amp;lt;/code&amp;gt;&lt;br /&gt;
: Name my job&lt;br /&gt;
: Names your job &amp;quot;J1.&amp;quot;  When you [[Hoffman2:Monitoring Jobs#qstat|look at the queue]], this will be the text that shows up in the &amp;quot;name&amp;quot; column.  This will also be the beginning of the output (&amp;lt;code&amp;gt;J1.o[JOBID]&amp;lt;/code&amp;gt;) and error (&amp;lt;code&amp;gt;J1.e[JOBID]&amp;lt;/code&amp;gt;) files for your job.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-l h_data=64M,express,h_rt=00:05:00&amp;lt;/code&amp;gt;&lt;br /&gt;
: Resource allocation (that&#039;s a lower case &amp;quot;L&amp;quot;)&lt;br /&gt;
: This is the resources flag meaning that the text immediately after it will ask for things like:&lt;br /&gt;
:* certain amount of memory, in [http://en.wikipedia.org/wiki/Megabyte Megabytes], or [http://en.wikipedia.org/wiki/Gigabyte Gigabytes]&lt;br /&gt;
:** h_data=64M (64 MB RAM) or h_data=1G (1 GB RAM)&lt;br /&gt;
:** &amp;quot;mem&amp;quot; no longer works &lt;br /&gt;
: In this case, our demands for RAM are really low, so we are requesting only 64MB.&lt;br /&gt;
: &#039;&#039;&#039;Edit (2013.09)&#039;&#039;&#039; - If your job uses more RAM than it requested, your job WILL be killed in order to avoid it hurting other jobs running on the same node. It is imperative that you set this RAM request properly.&lt;br /&gt;
:* certain length of computing time, in the form HH:MM:SS&lt;br /&gt;
:** h_rt=00:05:00    or&lt;br /&gt;
:** time=00:05:00&lt;br /&gt;
: In this case the script will complete its task rapidly, hence we are only asking for 5 minutes of computing time.&lt;br /&gt;
:* queue type, only a few options here&lt;br /&gt;
:** [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#express express]&lt;br /&gt;
:**: Time limit of 2 hours, and it tends to be overloaded so it isn&#039;t recommended&lt;br /&gt;
:** [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#highp highp]&lt;br /&gt;
:**: Job length maximum of 14 days but can only be run on nodes belonging to your resource group (type &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what type of resources you have available). If you are in the mscohen or sbook usergroups on Hoffman2, you have access to some of these highp nodes.&lt;br /&gt;
:** [blank] (nothing, nada, zilch)&lt;br /&gt;
:**: [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#day Standard queue], which has a maximum job length of 24 hours&lt;br /&gt;
: In this case, we are asking to be put on the express queue since this is such a short job, but the standard queue would have worked just as well if not better.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-M eplau&amp;lt;/code&amp;gt;&lt;br /&gt;
: Define mailing list&lt;br /&gt;
: This defines the list of users that will be mailed if email updates are requested.  The default address is that of the job-owner, but multiple emails can be specified using a comma separated list.&lt;br /&gt;
:: e.g. In this case, the email will be sent to the address on file for the user &amp;quot;eplau&amp;quot;&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;-m bea&amp;lt;/code&amp;gt;&lt;br /&gt;
: Define mailing rules&lt;br /&gt;
: This defines when Hoffman2 should email you about your job.  There are five options here&lt;br /&gt;
:* b - when the job begins&lt;br /&gt;
:* e - when the job ends&lt;br /&gt;
:* a - when the job is aborted&lt;br /&gt;
:* s - when the job is suspended&lt;br /&gt;
:* n - never&lt;br /&gt;
: The first four can be used in any combination, but the last obviously nullifies the others.&lt;br /&gt;
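: As a sketch (the login name is hypothetical), the two mail flags are typically combined like this:&lt;br /&gt;

```shell
# Sketch: combining -M (who to mail) and -m (when to mail). "ea" mails at
# end and abort; "bea" adds a message at the start; "n" disables mail.
MAIL_TO="jbruin"     # hypothetical Hoffman2 login
MAIL_WHEN="ea"
echo "qsub -M ${MAIL_TO} -m ${MAIL_WHEN} job.sh"
```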
&lt;br /&gt;
There are many other flags that you could use, but these are the basics that will get you through most of your computing.  Feel free to explore the others in the [http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Command Files==&lt;br /&gt;
Typing accurately can be difficult at times, so why put yourself through the trouble of having to retype the same arguments over and over if you will always be using about the same values?  Enter command files.&lt;br /&gt;
&lt;br /&gt;
You already have experience making a command file (~/job-output/gather.sh.cmd) from when you used the tool &amp;lt;code&amp;gt;job.q&amp;lt;/code&amp;gt;.  But did you know that you can edit that command file to make changes to how it runs, or write your own?&lt;br /&gt;
&lt;br /&gt;
The command files generated by &amp;lt;code&amp;gt;job.q&amp;lt;/code&amp;gt; are fairly well commented, so if you take a look at them with your favorite [[Text Editors|text editor]] you should be able to change their behavior.  For instance, go into the command file from the job.q example and find the lines that say&lt;br /&gt;
 #  Notify at beginning and end of job&lt;br /&gt;
 #$ -m bea&lt;br /&gt;
You will recognize this as the flag controlling when email messages are sent.  Go ahead and change it to&lt;br /&gt;
 # Notify at the end and on abort&lt;br /&gt;
 #$ -m ae&lt;br /&gt;
And you will receive an email only when your job finishes (or if it aborts).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===q.sh===&lt;br /&gt;
You could make a generic command file that contains all the basic flags that you care about.  We&#039;ve even got an example ready and available for you at&lt;br /&gt;
 /u/home/FMRI/apps/examples/qsub/q.sh&lt;br /&gt;
The script contents are shown below:&lt;br /&gt;
 qsub &amp;lt;&amp;lt;CMD&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # Use current working directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # Error stream is merged with the standard output&lt;br /&gt;
 #$ -j y&lt;br /&gt;
 # Use the bash shell for job execution&lt;br /&gt;
 #$ -S /bin/bash&lt;br /&gt;
 # Use your normal environment variables in the job&lt;br /&gt;
 #$ -V&lt;br /&gt;
 # Use 1GB of RAM and the main queue, with a maximum of 2 hours computing time&lt;br /&gt;
 #$ -l h_data=1024M,h_rt=2:00:00&lt;br /&gt;
 $@&lt;br /&gt;
 CMD&lt;br /&gt;
To use this command file to submit the &#039;&#039;gather.sh&#039;&#039; example script, you would execute the command:&lt;br /&gt;
 $ q.sh gather.sh&lt;br /&gt;
You can do this because, if you have [[Hoffman2:Software Tools#Setting Up Your Account to Access the Tools|set up your Bash profile correctly]], these scripts are in your [[Hoffman2:UNIX Tutorial#PATH|Unix PATH variable]].  You can replace &#039;&#039;gather.sh&#039;&#039; with any script you want executed and it will be submitted as a job on the cluster.  We recommend that you make your own copy of &#039;&#039;q.sh&#039;&#039; and keep it in your local &#039;&#039;bin&#039;&#039; directory (~/bin) so that you can edit it to suit your needs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Job Arrays==&lt;br /&gt;
There is an SGE qsub argument that allows you to submit multiple jobs in parallel that use the same script.  It is&lt;br /&gt;
 -t lower-upper:interval&lt;br /&gt;
where&lt;br /&gt;
;&amp;lt;code&amp;gt;lower&amp;lt;/code&amp;gt;&lt;br /&gt;
: is replaced with the starting number&lt;br /&gt;
;&amp;lt;code&amp;gt;upper&amp;lt;/code&amp;gt;&lt;br /&gt;
: is replaced with the ending number&lt;br /&gt;
;&amp;lt;code&amp;gt;interval&amp;lt;/code&amp;gt;&lt;br /&gt;
: is replaced with the step interval&lt;br /&gt;
So adding the argument&lt;br /&gt;
 -t 10-100:5&lt;br /&gt;
will step through the numbers 10, 15, 20, 25, ..., 100 submitting a job for each one.&lt;br /&gt;
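You can preview the same arithmetic sequence locally with &amp;lt;code&amp;gt;seq&amp;lt;/code&amp;gt; (a sketch; on the cluster, SGE itself generates the task IDs):&lt;br /&gt;

```shell
# Preview the task IDs that "-t 10-100:5" would generate.
# seq FIRST STEP LAST prints the same sequence: 10, 15, 20, ..., 100.
seq 10 5 100
```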
&lt;br /&gt;
In jobs that are called with this flag, there will be an [[Hoffman2:UNIX Tutorial#Environment Variables|environment variable]] called &amp;lt;code&amp;gt;SGE_TASK_ID&amp;lt;/code&amp;gt; whose value will be incremented over the range you specified.  Each possible value of SGE_TASK_ID will be submitted as its own job, so your work will be parallelized.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Examples===&lt;br /&gt;
Why would anyone use this?  Here are some examples&lt;br /&gt;
&lt;br /&gt;
====Lots of numbers====&lt;br /&gt;
Let&#039;s say you have a script, &#039;&#039;&#039;myFunc.sh&#039;&#039;&#039;, that takes one numerical input and computes a bunch of values based on that input.  But you need to run &amp;lt;code&amp;gt;myFunc.sh&amp;lt;/code&amp;gt; for input values 1 to 100.  One solution would be to write a wrapper script &#039;&#039;&#039;myFuncSlowWrapper.sh&#039;&#039;&#039; as&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # myFuncSlowWrapper.sh&lt;br /&gt;
 for i in {1..100};&lt;br /&gt;
 do&lt;br /&gt;
     myFunc.sh $i;&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
The only drawback is that this will take quite a while since all 100 iterations will be done on a single processor.  With job arrays, the computations will be split among many processors and can finish much more quickly.  You would instead write a wrapper script called &#039;&#039;&#039;myFuncFastWrapper.sh&#039;&#039;&#039; as&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # myFuncFastWrapper.sh&lt;br /&gt;
 echo $SGE_TASK_ID&lt;br /&gt;
 myFunc.sh $SGE_TASK_ID&lt;br /&gt;
&lt;br /&gt;
And submit it with&lt;br /&gt;
 qsub -cwd -V -N PJ -l h_data=1024M,express,h_rt=01:00:00 -M eplau -m bea -t 1-100:1 myFuncFastWrapper.sh&lt;br /&gt;
 &lt;br /&gt;
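Outside the cluster you can dry-run the wrapper by setting SGE_TASK_ID yourself (a sketch; the &amp;lt;code&amp;gt;myFunc&amp;lt;/code&amp;gt; stand-in below is hypothetical):&lt;br /&gt;

```shell
# Dry run: simulate a few array tasks locally. On the cluster, SGE sets
# SGE_TASK_ID for each task; here we set it by hand with a stand-in myFunc.
myFunc() { echo "processing input $1"; }
for SGE_TASK_ID in 1 2 3; do
    myFunc "$SGE_TASK_ID"
done
```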
&lt;br /&gt;
====Lots of files====&lt;br /&gt;
Let&#039;s say you have a script, &#039;&#039;&#039;myFunc2.sh&#039;&#039;&#039;, that takes the name of a file as input and opens that file and runs a bunch of computations on its contents.  But you have 100 such files to process.  One solution would be to write a wrapper script &#039;&#039;&#039;myFunc2SlowWrapper.sh&#039;&#039;&#039; as&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # myFunc2SlowWrapper.sh&lt;br /&gt;
 for FILE in `ls dir/of/files`;&lt;br /&gt;
 do&lt;br /&gt;
     myFunc2.sh $FILE&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
But this will take quite a while since all 100 iterations will be done on a single processor. With job arrays, the computations will be split among many processors since they are submitted as their own jobs and can finish much more quickly.  You could instead create a file that contains a list of all 100 files that need to be processed and call it &#039;&#039;&#039;filesToProcess&#039;&#039;&#039;. Then write a wrapper script called &#039;&#039;&#039;myFunc2FastWrapper.sh&#039;&#039;&#039; as&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # myFunc2FastWrapper.sh&lt;br /&gt;
 echo $SGE_TASK_ID&lt;br /&gt;
 myFunc2.sh `sed -n ${SGE_TASK_ID}p /path/to/list/of/files`&lt;br /&gt;
&lt;br /&gt;
where you replace &#039;&#039;/path/to/list/of/files&#039;&#039; with the path to &#039;&#039;&#039;filesToProcess&#039;&#039;&#039;.  The code&lt;br /&gt;
 `sed -n ${SGE_TASK_ID}p /path/to/list/of/files`&lt;br /&gt;
uses &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt; to grab the ${SGE_TASK_ID}&#039;th line from the file &#039;&#039;&#039;/path/to/list/of/files&#039;&#039;&#039; and returns it as the argument (the surrounding backticks perform command substitution).&lt;br /&gt;
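You can try the sed trick on a throwaway list to see exactly what each task would receive (the file names here are made up):&lt;br /&gt;

```shell
# Demonstrate picking the Nth line of a file list with sed, as the wrapper does.
list=$(mktemp)
printf 'file-a\nfile-b\nfile-c\n' > "$list"
SGE_TASK_ID=2
picked=`sed -n ${SGE_TASK_ID}p "$list"`   # grabs line 2: file-b
echo "$picked"
rm -f "$list"
```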
&lt;br /&gt;
Then you&#039;d submit it with&lt;br /&gt;
 qsub -cwd -V -N PJ -l h_data=1024M,express,h_rt=01:00:00 -M eplau -m bea -t 1-100:1 myFunc2FastWrapper.sh&lt;br /&gt;
&lt;br /&gt;
If your files were named regularly with a &#039;-number&#039; at the end (e.g. &#039;file-1&#039;, &#039;file-2&#039;, &#039;file-3&#039;, ... &#039;file-n&#039;), you could just make &#039;&#039;&#039;myFunc2FastWrapperB.sh&#039;&#039;&#039; as&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # myFunc2FastWrapperB.sh&lt;br /&gt;
 echo $SGE_TASK_ID&lt;br /&gt;
 myFunc2.sh file-${SGE_TASK_ID}&lt;br /&gt;
and submit it the same way.&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page]&lt;br /&gt;
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm Hoffman2 Types of Queues]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Staglin_Panic_Button&amp;diff=2910</id>
		<title>Staglin Panic Button</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Staglin_Panic_Button&amp;diff=2910"/>
		<updated>2015-04-01T18:37:45Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Staglin Panic Button */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
=== Prowl Install Instructions ===&lt;br /&gt;
(IPHONES ONLY)&lt;br /&gt;
&lt;br /&gt;
(1)  You&#039;ll need to install the &amp;quot;Prowl&amp;quot; app from the App Store.  It runs $3, but it&#039;s a worthy price for the notification system.  The icon is a thin black cat with yellow eyes.&lt;br /&gt;
&lt;br /&gt;
(2) Once that is installed, you&#039;ll need a Prowl account.  Available for free here: https://www.prowlapp.com/register.php&lt;br /&gt;
&lt;br /&gt;
(3) Navigate to the &amp;quot;API Keys&amp;quot; tab of the Prowl website and generate a new API key.  Send this key value to andrew.y.cho@ucla.edu and it&#039;ll be added to the list.&lt;br /&gt;
&lt;br /&gt;
(4) For the application preferences of Prowl, make sure that notifications are turned on.  There are other settings like &amp;quot;Quiet Hours&amp;quot; that Prowl allows you to configure.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Staglin_Panic_Button&amp;diff=2909</id>
		<title>Staglin Panic Button</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Staglin_Panic_Button&amp;diff=2909"/>
		<updated>2015-04-01T18:37:35Z</updated>

		<summary type="html">&lt;p&gt;Acho: Created page with &amp;quot;== Staglin Panic Button ==  === Prowl Install Instructions === (IPHONES ONLY)  (1)  You&amp;#039;ll need to install the &amp;quot;Prowl&amp;quot; app from the App Store.  It runs $3, but it&amp;#039;s a worthy p...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Staglin Panic Button ==&lt;br /&gt;
&lt;br /&gt;
=== Prowl Install Instructions ===&lt;br /&gt;
(IPHONES ONLY)&lt;br /&gt;
&lt;br /&gt;
(1)  You&#039;ll need to install the &amp;quot;Prowl&amp;quot; app from the App Store.  It runs $3, but it&#039;s a worthy price for the notification system.  The icon is a thin black cat with yellow eyes.&lt;br /&gt;
&lt;br /&gt;
(2) Once that is installed, you&#039;ll need a Prowl account.  Available for free here: https://www.prowlapp.com/register.php&lt;br /&gt;
&lt;br /&gt;
(3) Navigate to the &amp;quot;API Keys&amp;quot; tab of the Prowl website and generate a new API key.  Send this key value to andrew.y.cho@ucla.edu and it&#039;ll be added to the list.&lt;br /&gt;
&lt;br /&gt;
(4) For the application preferences of Prowl, make sure that notifications are turned on.  There are other settings like &amp;quot;Quiet Hours&amp;quot; that Prowl allows you to configure.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB_Licenses&amp;diff=2817</id>
		<title>Hoffman2:MATLAB Licenses</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB_Licenses&amp;diff=2817"/>
		<updated>2014-10-31T01:35:37Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
This page describes how to set up pulling licenses from Hoffman2 onto your local machine.&lt;br /&gt;
&lt;br /&gt;
== Windows ==&lt;br /&gt;
1. Install Cygwin with the instructions [http://docs.oracle.com/cd/E24628_01/install.121/e22624/preinstall_req_cygwin_ssh.htm#EMBSC152 here].&lt;br /&gt;
ONLY FOLLOW  SECTION 5.3 - Installing Cygwin&lt;br /&gt;
&lt;br /&gt;
2. Move win-matlab.sh [Ask your administrator for this file] into C:/cygwin64/usr/local/bin&lt;br /&gt;
&lt;br /&gt;
3. Make sure Matlab is installed in Programs (Default Settings)&lt;br /&gt;
&lt;br /&gt;
4. Open up Cygwin and type [ win-matlab.sh ]; it should ask for your Hoffman2 credentials.&lt;br /&gt;
- You will be entering your password several times. So no, you didn&#039;t type it incorrectly. &lt;br /&gt;
&lt;br /&gt;
== Macs/Linux ==&lt;br /&gt;
1. Make sure Matlab is installed in your Applications Folder (Mac OSX) or bin folder (Unix)&lt;br /&gt;
&lt;br /&gt;
2. Make sure you have the matlab script on your desktop, or somewhere easily accessible [Ask your administrator for this file]. &lt;br /&gt;
&lt;br /&gt;
3. Open up Terminal, go to the directory containing the script, and run it with [ sh mac-matlab.sh ].&lt;br /&gt;
- You will be entering your password several times. So don&#039;t worry, you typed it correctly the first time.&lt;br /&gt;
&lt;br /&gt;
== Hoffman2 ==&lt;br /&gt;
1. Log onto Hoffman2 using SSH. Please make sure you have an X11 windowing system installed. [[Hoffman2:Accessing_the_Cluster | More Information Here]]&lt;br /&gt;
&lt;br /&gt;
2. Type in [ matlab ]. It will ask how long you want your session to last. Enter however many hours you will use Matlab for. Be warned! This is a hard limit and is ENFORCED strictly. You will be kicked out of Matlab when your time runs out.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB_Licenses&amp;diff=2816</id>
		<title>Hoffman2:MATLAB Licenses</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB_Licenses&amp;diff=2816"/>
		<updated>2014-10-31T01:18:40Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
This page describes how to set up pulling licenses from Hoffman2 onto your local machine.&lt;br /&gt;
&lt;br /&gt;
== Windows ==&lt;br /&gt;
1. Install Cygwin with the instructions [http://docs.oracle.com/cd/E24628_01/install.121/e22624/preinstall_req_cygwin_ssh.htm#EMBSC152 here].&lt;br /&gt;
ONLY FOLLOW  SECTION 5.3 - Installing Cygwin&lt;br /&gt;
&lt;br /&gt;
2. Move win-matlab.sh [Ask your administrator for this file] into C:/cygwin64/usr/local/bin&lt;br /&gt;
&lt;br /&gt;
3. Make sure Matlab is installed in Programs (Default Settings)&lt;br /&gt;
&lt;br /&gt;
4. Open up Cygwin and type [ win-matlab.sh ]; it should ask for your Hoffman2 credentials.&lt;br /&gt;
- You will be entering your password several times. So no, you didn&#039;t type it incorrectly. &lt;br /&gt;
&lt;br /&gt;
== Macs/Linux ==&lt;br /&gt;
1. Make sure Matlab is installed in your Applications Folder (Mac OSX) or bin folder (Unix)&lt;br /&gt;
&lt;br /&gt;
2. Open up Terminal and Run matlab.sh (from anywhere).&lt;br /&gt;
- You will be entering your password several times. So don&#039;t worry, you typed it correctly the first time.&lt;br /&gt;
&lt;br /&gt;
== Hoffman2 ==&lt;br /&gt;
1. Log onto Hoffman2 using SSH. Please make sure you have an X11 windowing system installed. [[Hoffman2:Accessing_the_Cluster | More Information Here]]&lt;br /&gt;
&lt;br /&gt;
2. Type in [ matlab ]. It will ask how long you want your session to last. Enter however many hours you will use Matlab for. Be warned! This is a hard limit and is ENFORCED strictly. You will be kicked out of Matlab when your time runs out.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB_Licenses&amp;diff=2815</id>
		<title>Hoffman2:MATLAB Licenses</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB_Licenses&amp;diff=2815"/>
		<updated>2014-10-31T00:36:36Z</updated>

		<summary type="html">&lt;p&gt;Acho: Created page with &amp;quot;Back to all things Hoffman2  This page describes how to setup pulling licenses from Hoffman2 onto your local machine.  == Windows == 1. Install Cwygin with the in...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
This page describes how to set up pulling licenses from Hoffman2 onto your local machine.&lt;br /&gt;
&lt;br /&gt;
== Windows ==&lt;br /&gt;
1. Install Cygwin with the instructions [http://docs.oracle.com/cd/E24628_01/install.121/e22624/preinstall_req_cygwin_ssh.htm#EMBSC152 here].&lt;br /&gt;
ONLY FOLLOW  SECTION 5.3 - Installing Cygwin&lt;br /&gt;
&lt;br /&gt;
2. Move win-matlab.sh [Ask your administrator for this file] into C:/cygwin64/usr/local/bin&lt;br /&gt;
&lt;br /&gt;
3. Make sure Matlab is installed in Programs (Default Settings)&lt;br /&gt;
&lt;br /&gt;
4. Open up Cygwin and type [ win-matlab.sh ]; it should ask for your Hoffman2 credentials.&lt;br /&gt;
- You will be entering your password several times. So no, you didn&#039;t type it incorrectly. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Macs/Linux ==&lt;br /&gt;
1. Make sure Matlab is installed in your Applications Folder (Mac OSX) or bin folder (Unix)&lt;br /&gt;
&lt;br /&gt;
2. Open up Terminal and Run matlab.sh (from anywhere).&lt;br /&gt;
- You will be entering your password several times. So no, you didn&#039;t type it incorrectly.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Quotas&amp;diff=2800</id>
		<title>Hoffman2:Quotas</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Quotas&amp;diff=2800"/>
		<updated>2014-10-14T21:50:01Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
Users and groups of users on Hoffman2 only have access to a predefined amount of disk space and number of files.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;After the quota is reached, your account (or every account in the usergroup) will have reduced capabilities, since you won&#039;t be able to create any new files.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;color: red&amp;quot;&amp;gt;&#039;&#039;&#039;UNDER CONSTRUCTION!!!!!&#039;&#039;&#039;&lt;br /&gt;
This page is under construction. Will have an update soon. SORRY!&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Stay In The Know==&lt;br /&gt;
Keep yourself apprised of how much data you are using with these tools.&lt;br /&gt;
&lt;br /&gt;
===Personal Quotas===&lt;br /&gt;
 myquota&lt;br /&gt;
:Returns information about how much disk space you are using and how many files you have.  An example output is shown below.&lt;br /&gt;
 $ myquota&lt;br /&gt;
 User quotas for eplau (UID 8693) (in GBs):&lt;br /&gt;
 Filesystem            Usage (in GB)          Quota     File Count     File Quota&lt;br /&gt;
 /home/mscohen                   141           5120          58575        8000000&lt;br /&gt;
 Filesystem /home/mscohen usage: 3857 of 5120 GBs (75.3%) and 7024000 of 8000000 files (87.8%)&lt;br /&gt;
 /home/sbook                       1           2048              4        5000000&lt;br /&gt;
 Filesystem /home/sbook usage: 5901 of 6144 GBs (96.1%) and 5881211 of 6000000 files (98.0%)&lt;br /&gt;
&lt;br /&gt;
===Group Quotas===&lt;br /&gt;
 myquota -g [GROUPNAME]&lt;br /&gt;
:Returns information about how much space you and everyone in your resource group are using on Hoffman2. An example output is shown below.&lt;br /&gt;
 $ myquota -g mscohen&lt;br /&gt;
 Group mscohen Report (/home/mscohen):&lt;br /&gt;
 Username  UID    Usage (in GB)          Quota     File Count     File Quota&lt;br /&gt;
 aarontre  9307             164           6144          81477        8000000&lt;br /&gt;
 aburggre  8223               0           6144             10        8000000&lt;br /&gt;
 akshaan   10094              0           6144            120        8000000&lt;br /&gt;
 alenarto  8800             254           6144          97615        8000000&lt;br /&gt;
 alhead    9612             100           6144           1001        8000000&lt;br /&gt;
 ariana    8186             293           6144         420801        8000000&lt;br /&gt;
 ayc       8955             160           6144        1180551        8000000&lt;br /&gt;
 cdrodrig  9545               3           6144             52        8000000&lt;br /&gt;
 dcmoyer   9397             921           6144         173754        8000000&lt;br /&gt;
 diannaha  8134               0           6144            171        8000000&lt;br /&gt;
 eddieyan  10322              0           6144             16        8000000&lt;br /&gt;
 eplau     8693             523           6144         123964        8000000&lt;br /&gt;
 eshwang1  9811               0           6144             54        8000000&lt;br /&gt;
 fbiessma  10212              0           6144             67        8000000&lt;br /&gt;
 fmri      1901               0           6144            479        8000000&lt;br /&gt;
 jbramen   8369             632           6144         462428        8000000&lt;br /&gt;
 jbrown    8187               0           6144            148        8000000&lt;br /&gt;
 jianwen   9921               0           6144             19        8000000&lt;br /&gt;
 kaavyara  9295               0           6144           1064        8000000&lt;br /&gt;
 kerr      8555             487           6144          34095        8000000&lt;br /&gt;
 kesslers  9815              25           6144          60996        8000000&lt;br /&gt;
 mhussien  9922               0           6144             10        8000000&lt;br /&gt;
 mlschaef  9447              11           6144          15783        8000000&lt;br /&gt;
 mowyong   10329              0           6144            201        8000000&lt;br /&gt;
 mscohen   4004              18           6144           8458        8000000&lt;br /&gt;
 mundaeru  9542               0           6144             10        8000000&lt;br /&gt;
 mwollner  9696              35           6144           2721        8000000&lt;br /&gt;
 pamelita  8557            1280           6144         946730        8000000&lt;br /&gt;
 root      0                  0           6144              1        8000000&lt;br /&gt;
 sanjayra  9846               0           6144             10        8000000&lt;br /&gt;
 xiahongj  9047             666           6144         289695        8000000&lt;br /&gt;
 Filesystem mscohen usage: 5656 of 6144 GBs (92.1%) and 3921309 of 8000000 files (49.0%)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Clean Up After Yourself==&lt;br /&gt;
Every so often, some spring cleaning is useful.  We have an app for that.&lt;br /&gt;
=====clean_me.py=====&lt;br /&gt;
Available to everyone in the FMRI usergroup on Hoffman2, this script was designed to find certain types of pesky files that have been known to build up over time but aren&#039;t actually necessary:&lt;br /&gt;
* &#039;&#039;.DS_store&#039;&#039; files&lt;br /&gt;
* Empty directories&lt;br /&gt;
* tsplots&lt;br /&gt;
* Empty files&lt;br /&gt;
* Extended Attributes&lt;br /&gt;
* mat files&lt;br /&gt;
and it gives you the option of deleting them.&lt;br /&gt;
&lt;br /&gt;
To run it, just change into the directory you wish to clean and use the command:&lt;br /&gt;
 $ clean_me.py&lt;br /&gt;
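The script&#039;s internals aren&#039;t published here, but a rough preview of some of the file types it hunts for can be had with &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt; (a hedged sketch that only lists candidates, never deletes anything):&lt;br /&gt;

```shell
# Sketch: list .DS_Store files and empty files/directories in a sandbox,
# similar in spirit to (but not the same as) what clean_me.py targets.
sandbox=$(mktemp -d)
touch "$sandbox/.DS_Store" "$sandbox/empty-file"
mkdir "$sandbox/empty-dir"
candidates=$(find "$sandbox" -mindepth 1 \( -name '.DS_Store' -o -empty \) -print)
echo "$candidates"
rm -rf "$sandbox"
```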
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==When it Hits the Fan==&lt;br /&gt;
[[File:Hoffman2-ButtonPress.gif]]&lt;br /&gt;
&lt;br /&gt;
You tried to remember to clean up your files; you even kept a large dataset on your computer last weekend instead of working with it on Hoffman2.  But still your quota, or your group&#039;s quota, was reached.  How does one fix this?&lt;br /&gt;
&lt;br /&gt;
# First things first, run &amp;lt;code&amp;gt;clean_me.py&amp;lt;/code&amp;gt;&lt;br /&gt;
# Start identifying what you can delete.  This is a wonderful opportunity to audit your home directory to see what really is in the directory called &#039;&#039;temp-567&#039;&#039; and what you actually put in the file &#039;&#039;subjects-az-list&#039;&#039;.&lt;br /&gt;
# [[Tar Tutorial|Tar]] things up.  That is to say, take that huge collection of DICOM files and turn them into a single file to cut down on overhead (remember it is about both file count and file size on Hoffman2).  Unfamiliar with what tar is?  Check out the [[Tar Tutorial]].&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=2799</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=2799"/>
		<updated>2014-10-14T05:39:03Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA and is named for Paul Hoffman (1947-2003).  It is maintained by the [https://idre.ucla.edu/ IDRE] at UCLA and the main official webpage is [http://hpc.ucla.edu/hoffman2/hoffman2.php here].  With high-end processor, data-storage, and backup technologies, it is a useful tool for executing research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, more than 5.5 million compute hours were logged.  See more usage statistics [http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Sun Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our point.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 is made up of more than 12,000 processors across three data centers, and this number continues to grow as the cluster is expanded. [http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php  Stats.] The individual cores of the processors are where your programs get executed when you submit a job to the cluster.  There are ways to request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster that has more than 300 nodes, but access to this must be requested separately from a normal Hoffman2 account.  For more information, go [http://hpc.ucla.edu/hoffman2/computing/gpuq.php here].&lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that resource group and can be used to run longer jobs ([http://hpc.ucla.edu/hoffman2/computing/policies.php#highp up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancies and are fault-tolerant; redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, there are also alternative [ backups ].&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you login to Hoffman2, you get dropped into your home directory immediately. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/[u]/[username]&amp;lt;/pre&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin          # Common home directory&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
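:The pattern can be sketched in shell (the username here is hypothetical):&lt;br /&gt;

```shell
# Sketch: derive the Hoffman2 home-directory path from a username;
# the first letter of the username forms the intermediate directory.
username="jbruin"                          # hypothetical user
first=$(printf '%s' "$username" | cut -c1) # first letter -> subdirectory
echo "/u/home/${first}/${username}"
```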
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster contributing group, you can also store data files in that group&#039;s common space described in the next section...&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories for specific shared projects&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/project/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits to how many files can be put in them and how large those files can be.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing a bunch of files in your home directory can be slow.  So faster temporary storage is available to use for ongoing jobs.  Read the official description [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
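:This pattern can be sketched as a hypothetical job script; the file &amp;lt;code&amp;gt;mydata.dat&amp;lt;/code&amp;gt; and the program &amp;lt;code&amp;gt;analyze&amp;lt;/code&amp;gt; are placeholder names, not real Hoffman2 tools:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch of a job body that does its heavy I/O in the fast node-local
# $TMPDIR. mydata.dat and analyze are placeholders for illustration.

cp "$HOME/mydata.dat" "$TMPDIR/"     # stage input onto fast local disk
cd "$TMPDIR"
./analyze mydata.dat > results.txt   # repeated reads/writes happen here
cp results.txt "$HOME/"              # save results before /work cleanup
```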
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
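:The same staging idea works for scratch; a minimal sketch, where &amp;lt;code&amp;gt;myproject&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;big.nii&amp;lt;/code&amp;gt; are placeholder names:&lt;br /&gt;

```shell
# Keep large intermediates under $SCRATCH rather than $HOME.
# myproject and big.nii are placeholder names.
mkdir -p "$SCRATCH/myproject"
cp "$HOME/big.nii" "$SCRATCH/myproject/"
# Files here may be deleted after 7 days, so copy anything you want to
# keep back to $HOME or your group directory when you are done.
```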
&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means archiving data you rarely need to access. Because disk space on Hoffman2 can be very expensive, IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Univa Grid Engine===&lt;br /&gt;
The Univa Grid Engine (UGE) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the UGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs are generally scheduled sooner, while more demanding ones must wait for adequate resources to free up.  The UGE tries to schedule jobs on computing nodes in order to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; express&lt;br /&gt;
: For jobs requesting at most 2 hours of computing time.&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
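Which queue a job lands in is driven by the resources it requests. A hedged sketch using common SGE/UGE resource flags (&amp;lt;code&amp;gt;h_rt&amp;lt;/code&amp;gt; for wall-clock time, &amp;lt;code&amp;gt;h_data&amp;lt;/code&amp;gt; for memory; &amp;lt;code&amp;gt;myjob.sh&amp;lt;/code&amp;gt; is a placeholder script):&lt;br /&gt;

```shell
# Hypothetical submission: request at most 2 hours of wall-clock time
# and 2GB of memory, keeping the job eligible for the express queue.
# h_rt/h_data are common SGE/UGE resource names; myjob.sh is a placeholder.
qsub -l h_rt=2:00:00,h_data=2G myjob.sh
```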
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs|Hoffman2 Cluster]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=2798</id>
		<title>Hoffman2:Introduction</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&amp;diff=2798"/>
		<updated>2014-10-14T02:57:35Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
==What is Hoffman2?==&lt;br /&gt;
The Hoffman2 Cluster is a campus computing resource at UCLA and is named for Paul Hoffman (1947-2003).  It is maintained by the [https://idre.ucla.edu/ IDRE] at UCLA and the main official webpage is [http://hpc.ucla.edu/hoffman2/hoffman2.php here].  With many high-end processor, data storage, and backup technologies, it is a useful tool for executing research computations, especially when working with large datasets.  More than 1000 users are currently registered and the cluster sees tremendous usage.  Click [[Hoffman2:Getting an Account|here]] to find out how to join.  In September 2014 alone, there were more than 5.5 million compute hours logged.  See more usage statistics [http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php here].&lt;br /&gt;
&lt;br /&gt;
==Anatomy of the Computing Cluster==&lt;br /&gt;
What does Hoffman2 consist of?&lt;br /&gt;
* Login Nodes&lt;br /&gt;
* Computing Nodes&lt;br /&gt;
* Storage Space&lt;br /&gt;
* Sun Grid Engine (a brain of sorts)&lt;br /&gt;
&lt;br /&gt;
[[File:Hoffman2-layout.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;**Image taken from a previous ATS &amp;quot;Using Hoffman2 Cluster&amp;quot; slide deck and modified for our point.**&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Login Nodes===&lt;br /&gt;
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster.  These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit).  It is important to remember that these are four computers being shared by ALL the Hoffman2 users.  Doing ANY type of heavy computing on these nodes is frowned upon.  If you are:&lt;br /&gt;
*moving lots of files&lt;br /&gt;
*calculating the inverse solution to an EEG signal, or&lt;br /&gt;
*running a bunch of python scripts to extract tractography of a brain&lt;br /&gt;
You should NOT be doing this on a login node.  If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Computing Nodes===&lt;br /&gt;
As of April 2014, Hoffman2 is made up of more than 12000 processors across three data centers, and this number continues to grow as the cluster is expanded. [http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php  Stats.] The individual cores of the processors are where your programs get executed when you submit a job to the cluster.  There are ways to request different amounts of resources, such as how much RAM or how many CPU cores your program/job needs.&lt;br /&gt;
&lt;br /&gt;
There is also a GPU cluster that has more than 300 nodes, but access to this must be requested separately from a normal Hoffman2 account.  For more information, go [http://hpc.ucla.edu/hoffman2/computing/gpuq.php here].&lt;br /&gt;
&lt;br /&gt;
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster.  Nodes contributed by a resource group are guaranteed to that group and can be used to run longer jobs ([http://hpc.ucla.edu/hoffman2/computing/policies.php#highp up to 14 days]).  As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:&lt;br /&gt;
* 6 nodes (installed pre 2010) each with&lt;br /&gt;
** 8 cores&lt;br /&gt;
** 8GB RAM&lt;br /&gt;
* 3 nodes (installed Fall 2012) each with&lt;br /&gt;
** 16 cores&lt;br /&gt;
** 48GB RAM&lt;br /&gt;
Use the command &amp;lt;code&amp;gt;mygroup&amp;lt;/code&amp;gt; to see what resources you have available.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Storage Space===&lt;br /&gt;
For official and up-to-date information about storage space, [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php click here].  If you want a quick overview, see below.&lt;br /&gt;
&lt;br /&gt;
====Long Term Storage====&lt;br /&gt;
IDRE maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space.  These have built-in redundancies and are fault tolerant. Redundant backups are also available.&lt;br /&gt;
&lt;br /&gt;
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and IDRE takes great pains to make sure your data is safe. If you are paranoid, there are alternative [ backups ].&lt;br /&gt;
&lt;br /&gt;
=====Home Directories=====&lt;br /&gt;
:When you log in to Hoffman2, you are immediately dropped into your home directory. Home directory locations follow the pattern&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/[u]/[username]&amp;lt;/code&amp;gt;&lt;br /&gt;
:Where &amp;lt;code&amp;gt;[u]&amp;lt;/code&amp;gt; is the first letter of the username, e.g.&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/j/jbruin&amp;lt;/code&amp;gt;&lt;br /&gt;
::&amp;lt;code&amp;gt;/u/home/t/ttrojan&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files).  &#039;&#039;&#039;It is not the place for your large datasets for computing.&#039;&#039;&#039;  Data in your home directory is accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:Every user is allowed to store up to 20GB of data files in their home directory.  If you are part of a cluster contributing group, you can also store data files in that group&#039;s common space described in the next section...&lt;br /&gt;
&lt;br /&gt;
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]&lt;br /&gt;
&lt;br /&gt;
=====Group Directory=====&lt;br /&gt;
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013).  This is common space designed for collaboration and is where your datasets should mainly be stored.  Individual users are given directories under the main group directory to help organize data ownership.  For example:&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/mscohen           # Common group directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/mscohen/data      # Common group &amp;quot;data&amp;quot; directory, create subdirectories within this for specific projects or uses&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/mscohen/aaronab   # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
::&amp;lt;pre&amp;gt;/u/home/mscohen/kerr      # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:and these directories are accessible from all login and computing nodes.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;These directories have limits to how many files can be put in them and how large those files can be.&#039;&#039;&#039;&lt;br /&gt;
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 1TB worth of files, OR&lt;br /&gt;
:** 1 million files&lt;br /&gt;
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER&lt;br /&gt;
:** 4TB worth of files, OR&lt;br /&gt;
:** 4 million files&lt;br /&gt;
:&#039;&#039;&#039;Once a group&#039;s quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.&#039;&#039;&#039; This means any computing jobs you are running may fail due to an inability to write out their results.  You may also have trouble starting GUI sessions due to an inability to create temporary files.&lt;br /&gt;
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Temporary Storage====&lt;br /&gt;
When running a computing job on Hoffman2, reading and writing a bunch of files in your home directory can be slow.  So faster temporary storage is available to use for ongoing jobs.  Read the official description [http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php here].&lt;br /&gt;
&lt;br /&gt;
=====work=====&lt;br /&gt;
:&#039;&#039;&#039;/work&#039;&#039;&#039;&lt;br /&gt;
:Each computing node has its own unique &amp;quot;work&amp;quot; directory.  This is only accessible by jobs on that specific node.  &#039;&#039;&#039;Files in /work more than 24 hours old become eligible for automatic deletion.&#039;&#039;&#039; There is at least 200GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).&lt;br /&gt;
&lt;br /&gt;
:Every job is given a unique subdirectory on &#039;&#039;work&#039;&#039; where it can read and write files rapidly.  The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; points to this directory.&lt;br /&gt;
&lt;br /&gt;
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at job completion so it is not deleted.&lt;br /&gt;
&lt;br /&gt;
=====scratch=====&lt;br /&gt;
:&#039;&#039;&#039;/u/scratch/[u]/[username]&#039;&#039;&#039;&lt;br /&gt;
:Where &#039;&#039;[username]&#039;&#039; is replaced with your Hoffman2 username and &#039;&#039;[u]&#039;&#039; is replaced with the first letter of your username.  Data here is accessible on all login and computing nodes.  You can use up to 2TB of space here, but &#039;&#039;&#039;any files older than 7 days may be automatically deleted by the system&#039;&#039;&#039;. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] &amp;lt;code&amp;gt;$SCRATCH&amp;lt;/code&amp;gt; to reliably access your personal scratch directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Cold Storage====&lt;br /&gt;
Cold storage means archiving data you rarely need to access. Because disk space on Hoffman2 can be very expensive, IDRE offers a cloud archival storage service [http://www.cass.idre.ucla.edu/ here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Sun Grid Engine===&lt;br /&gt;
The Sun Grid Engine (SGE) is the brains behind how jobs get executed on the cluster.  When you request that a script be run on Hoffman2, the SGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements.  Less demanding jobs are generally scheduled sooner, while more demanding ones must wait for adequate resources to free up.  The SGE tries to schedule jobs on computing nodes in order to make the most efficient use of the resources available.&lt;br /&gt;
&lt;br /&gt;
====Queues====&lt;br /&gt;
There is more than one queue on Hoffman2.  Each is for a slightly different purpose:&lt;br /&gt;
; express&lt;br /&gt;
: For jobs requesting at most 2 hours of computing time.&lt;br /&gt;
; interactive&lt;br /&gt;
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.&lt;br /&gt;
; highp&lt;br /&gt;
: For jobs requesting at most 14 days of computing time.  These are required to run on nodes owned by your group.&lt;br /&gt;
And there are others.  Read about them [http://hpc.ucla.edu/hoffman2/computing/computing.php here].&lt;br /&gt;
&lt;br /&gt;
Find out how to submit computing jobs to the [[Hoffman2:Submitting Jobs|Hoffman2 Cluster]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links / Notes==&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/hoffman2.php Hoffman2 Webpage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/h2stat/h2stat.php Hoffman2 Statistics]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/data-storage/data-storage.php Hoffman2 Data Storage]&lt;br /&gt;
*[http://hpc.ucla.edu/hoffman2/computing/computing.php Hoffman2 Computing]&lt;br /&gt;
*[[Hoffman2:Introduction-Historical_Notes | Historical Notes]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2&amp;diff=2786</id>
		<title>Hoffman2</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2&amp;diff=2786"/>
		<updated>2014-10-11T01:44:29Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A compilation of lab know-how regarding the Hoffman2 Computing Cluster.&lt;br /&gt;
&lt;br /&gt;
Anyone new to the lab and using Hoffman2 NEEDS to read the first section to have adequate working knowledge of the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
Hoffman2 is a Computing Cluster at UCLA; find out how it generally works so you know how to use it.&lt;br /&gt;
: [[Hoffman2:Introduction]]&lt;br /&gt;
&lt;br /&gt;
=== Getting an Account ===&lt;br /&gt;
You know what it is, now you want to use it. First you need an account.&lt;br /&gt;
: [[Hoffman2:Getting an Account]]&lt;br /&gt;
&lt;br /&gt;
=== Accessing the Cluster ===&lt;br /&gt;
Now how do you use that account to access the cluster?&lt;br /&gt;
: [[Hoffman2:Accessing the Cluster]]&lt;br /&gt;
&lt;br /&gt;
=== Working in a UNIX Environment ===&lt;br /&gt;
Never heard of a command line before? Vaguely know what &amp;quot;permissions&amp;quot; are and have no idea how to navigate a filesystem? This page is meant to take the scare out of the words &amp;quot;command line&amp;quot; so you can actually use Hoffman2, because no matter how many GUIs there are, command line is king.&lt;br /&gt;
: [[Hoffman2:UNIX Tutorial]]&lt;br /&gt;
&lt;br /&gt;
=== Quotas ===&lt;br /&gt;
Resources are not infinite, and disk space is a resource. Find out how to manage your disk space usage to stay under quota.&lt;br /&gt;
: [[Hoffman2:Quotas]]&lt;br /&gt;
&lt;br /&gt;
=== Profile ===&lt;br /&gt;
You have an account and know how to get there; now you need to take one last step for your account to be fully usable. Time to get access to all the fun computation tools.&lt;br /&gt;
: [[Hoffman2:Profile]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Computing ==&lt;br /&gt;
You can find your way through Hoffman2; now it is time to start making things happen.&lt;br /&gt;
&lt;br /&gt;
=== Software Tools ===&lt;br /&gt;
You&#039;ve got your account, you are logged on, now how do you get to using a real software tool?&lt;br /&gt;
: [[Hoffman2:Software Tools]]&lt;br /&gt;
&lt;br /&gt;
=== Submitting Jobs ===&lt;br /&gt;
Now you have the tools, but how do you ask Hoffman2 to run them for you as a job? Since you aren&#039;t supposed to be running them on a login node...&lt;br /&gt;
: [[Hoffman2:Submitting Jobs]]&lt;br /&gt;
&lt;br /&gt;
=== Monitoring Jobs ===&lt;br /&gt;
Right after they zap their robot monster to life, every mad scientist wishes they had the tools to check on or stop their creation. Now that you can submit jobs, you need to be able to check on them and stop them if they start terrorizing downtown Tokyo.&lt;br /&gt;
: [[Hoffman2:Monitoring Jobs]]&lt;br /&gt;
&lt;br /&gt;
=== Interactive Sessions ===&lt;br /&gt;
Some software tools need you to interact with them while they work. Other times you just need to be able to run your script over and over while you work to eradicate all of its bugs. Enter &#039;&#039;Interactive&#039;&#039; Sessions.&lt;br /&gt;
: [[Hoffman2:Interactive Sessions]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
=== MATLAB ===&lt;br /&gt;
How to use MATLAB on the cluster. It is easier than you think. (Well, maybe...)&lt;br /&gt;
: [[Hoffman2:MATLAB]]&lt;br /&gt;
&lt;br /&gt;
==== Compiling MATLAB ====&lt;br /&gt;
So you have a MATLAB script, but you don&#039;t need the GUI open all night to have it process your data. How to submit MATLAB jobs to Hoffman2.&lt;br /&gt;
: [[Hoffman2:Compiling MATLAB]]&lt;br /&gt;
&lt;br /&gt;
==== EEGLAB ====&lt;br /&gt;
We try to maintain the three most recent versions of EEGLAB for your convenience. Make sure to add it to your MATLAB path.&lt;br /&gt;
: [[Hoffman2:MATLAB:EEGLAB]]&lt;br /&gt;
&lt;br /&gt;
===== EEGLAB Jobs =====&lt;br /&gt;
Processing multiple subjects through EEGLAB can be tiring and inconvenient if you do it by hand.  Learn how to make scripts that run as jobs leveraging the power of Hoffman2.&lt;br /&gt;
: [[Hoffman2:MATLAB:EEGLAB:Jobs]]&lt;br /&gt;
&lt;br /&gt;
==== SPM Compiled (Batch) ====&lt;br /&gt;
Maybe FSL isn&#039;t your cup of tea for neuroimaging work.  SPM is a capable alternative and, even though it is MATLAB based, it has a compiled version that will let you leverage the power of the cluster.&lt;br /&gt;
: [[Hoffman2:MATLAB:SPM]]&lt;br /&gt;
&lt;br /&gt;
=== R ===&lt;br /&gt;
You are probably a statistician, or you just prefer open source software. Here&#039;s how to run R on Hoffman2.&lt;br /&gt;
: [[Hoffman2:R]]&lt;br /&gt;
&lt;br /&gt;
=== WEKA ===&lt;br /&gt;
If machine learning is your thing, maybe you&#039;ve heard of WEKA. If not, maybe it will be your new best friend.&lt;br /&gt;
: [[Hoffman2:WEKA]]&lt;br /&gt;
&lt;br /&gt;
=== FSL ===&lt;br /&gt;
FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data.&lt;br /&gt;
: [[Hoffman2:FSL]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Productivity ==&lt;br /&gt;
How about streamlining some of those tasks, or getting more things done.&lt;br /&gt;
&lt;br /&gt;
=== Scripts ===&lt;br /&gt;
All of the difficulties you are experiencing now have probably been experienced before by someone else. And for that reason we already have scripts to simplify your life.&lt;br /&gt;
: [[Hoffman2:Scripts]]&lt;br /&gt;
&lt;br /&gt;
=== Data Transfer ===&lt;br /&gt;
All dressed up with nowhere to go? That&#039;s how Hoffman2 feels if you don&#039;t give it any data to work with. Find out how to avoid hurting the Cluster&#039;s feelings.&lt;br /&gt;
: [[Hoffman2:Data Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Sharing Filesystems ===&lt;br /&gt;
All you want to do is be able to look at your precious data. But it is locked up on Hoffman2 and you want to use tools on your computer to look at it. There&#039;s an app for that.&lt;br /&gt;
: [[Hoffman2:Sharing Filesystems]]&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
Simple tools that will help your productivity.&lt;br /&gt;
: [[Hoffman2:Tools]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== FAQ ==&lt;br /&gt;
Wesley&#039;s Usage, so you can plan around it and ask him to stop beating the cluster up.&lt;br /&gt;
: [[Hoffman2:WTK Usage]]&lt;br /&gt;
&lt;br /&gt;
Delete/Old Hoffman2 Pages&lt;br /&gt;
: [[Hoffman2:Archive]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Archive&amp;diff=2785</id>
		<title>Hoffman2:Archive</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Archive&amp;diff=2785"/>
		<updated>2014-10-11T01:43:14Z</updated>

		<summary type="html">&lt;p&gt;Acho: Created page with &amp;quot;Back to all things Hoffman2  Old Stuff. You must be pretty desperate to be here.    === LONI Pipeline === A Workflow application to make things easier. : Hoffma...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
Old Stuff. You must be pretty desperate to be here.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== LONI Pipeline ===&lt;br /&gt;
A Workflow application to make things easier.&lt;br /&gt;
: [[Hoffman2:LONI]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2&amp;diff=2784</id>
		<title>Hoffman2</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2&amp;diff=2784"/>
		<updated>2014-10-11T01:38:39Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A compilation of lab know-how regarding the Hoffman2 Computing Cluster.&lt;br /&gt;
&lt;br /&gt;
Anyone new to the lab and using Hoffman2 NEEDS to read the first section to have adequate working knowledge of the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
Hoffman2 is a Computing Cluster at UCLA; find out how it generally works so you know how to use it.&lt;br /&gt;
: [[Hoffman2:Introduction]]&lt;br /&gt;
&lt;br /&gt;
=== Getting an Account ===&lt;br /&gt;
You know what it is, now you want to use it. First you need an account.&lt;br /&gt;
: [[Hoffman2:Getting an Account]]&lt;br /&gt;
&lt;br /&gt;
=== Accessing the Cluster ===&lt;br /&gt;
Now how do you use that account to access the cluster?&lt;br /&gt;
: [[Hoffman2:Accessing the Cluster]]&lt;br /&gt;
&lt;br /&gt;
=== Working in a UNIX Environment ===&lt;br /&gt;
Never heard of a command line before? Vaguely know what &amp;quot;permissions&amp;quot; are and have no idea how to navigate a filesystem? This page is meant to take the scare out of the words &amp;quot;command line&amp;quot; so you can actually use Hoffman2, because no matter how many GUIs there are, command line is king.&lt;br /&gt;
: [[Hoffman2:UNIX Tutorial]]&lt;br /&gt;
&lt;br /&gt;
=== Quotas ===&lt;br /&gt;
Resources are not infinite, and disk space is a resource. Find out how to manage your disk space usage to stay under quota.&lt;br /&gt;
: [[Hoffman2:Quotas]]&lt;br /&gt;
&lt;br /&gt;
=== Profile ===&lt;br /&gt;
You have an account and know how to get there; now you need to take one last step for your account to be fully usable. Time to get access to all the fun computation tools.&lt;br /&gt;
: [[Hoffman2:Profile]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Computing ==&lt;br /&gt;
You can find your way through Hoffman2; now it is time to start making things happen.&lt;br /&gt;
&lt;br /&gt;
=== Software Tools ===&lt;br /&gt;
You&#039;ve got your account, you are logged on, now how do you get to using a real software tool?&lt;br /&gt;
: [[Hoffman2:Software Tools]]&lt;br /&gt;
&lt;br /&gt;
=== Submitting Jobs ===&lt;br /&gt;
Now you have the tools, but how do you ask Hoffman2 to run them for you as a job? Since you aren&#039;t supposed to be running them on a login node...&lt;br /&gt;
: [[Hoffman2:Submitting Jobs]]&lt;br /&gt;
&lt;br /&gt;
=== Monitoring Jobs ===&lt;br /&gt;
Right after they zap their robot monster to life, every mad scientist wishes they had the tools to check on or stop their creation. Now that you can submit jobs, you need to be able to check on them and stop them if they start terrorizing downtown Tokyo.&lt;br /&gt;
: [[Hoffman2:Monitoring Jobs]]&lt;br /&gt;
&lt;br /&gt;
=== Interactive Sessions ===&lt;br /&gt;
Some software tools need you to interact with them while they work. Other times you just need to be able to run your script over and over while you work to eradicate all of its bugs. Enter &#039;&#039;Interactive&#039;&#039; Sessions.&lt;br /&gt;
: [[Hoffman2:Interactive Sessions]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
=== MATLAB ===&lt;br /&gt;
How to use MATLAB on the cluster. It is easier than you think. (Well, maybe...)&lt;br /&gt;
: [[Hoffman2:MATLAB]]&lt;br /&gt;
&lt;br /&gt;
==== Compiling MATLAB ====&lt;br /&gt;
So you have a MATLAB script, but you don&#039;t need the GUI open all night to have it process your data. How to submit MATLAB jobs to Hoffman2.&lt;br /&gt;
: [[Hoffman2:Compiling MATLAB]]&lt;br /&gt;
&lt;br /&gt;
==== EEGLAB ====&lt;br /&gt;
We try to maintain the three most recent versions of EEGLAB for your convenience. Make sure to add it to your MATLAB path.&lt;br /&gt;
: [[Hoffman2:MATLAB:EEGLAB]]&lt;br /&gt;
&lt;br /&gt;
===== EEGLAB Jobs =====&lt;br /&gt;
Processing multiple subjects through EEGLAB can be tiring and inconvenient if you do it by hand.  Learn how to make scripts that run as jobs leveraging the power of Hoffman2.&lt;br /&gt;
: [[Hoffman2:MATLAB:EEGLAB:Jobs]]&lt;br /&gt;
&lt;br /&gt;
==== SPM Compiled (Batch) ====&lt;br /&gt;
Maybe FSL isn&#039;t your cup of tea for neuroimaging work.  SPM is a capable alternative and, even though it is MATLAB based, it has a compiled version that will let you leverage the power of the cluster.&lt;br /&gt;
: [[Hoffman2:MATLAB:SPM]]&lt;br /&gt;
&lt;br /&gt;
=== R ===&lt;br /&gt;
You are probably a statistician, or you just prefer open source software. Here&#039;s how to run R on Hoffman2.&lt;br /&gt;
: [[Hoffman2:R]]&lt;br /&gt;
&lt;br /&gt;
=== WEKA ===&lt;br /&gt;
If machine learning is your thing, maybe you&#039;ve heard of WEKA. If not, maybe it will be your new best friend.&lt;br /&gt;
: [[Hoffman2:WEKA]]&lt;br /&gt;
&lt;br /&gt;
=== FSL ===&lt;br /&gt;
FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data.&lt;br /&gt;
: [[Hoffman2:FSL]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Productivity ==&lt;br /&gt;
How about streamlining some of those tasks, or getting more things done.&lt;br /&gt;
&lt;br /&gt;
=== Scripts ===&lt;br /&gt;
All of the difficulties you are experiencing now have probably been encountered by someone else before. For that reason, we already have scripts to simplify your life.&lt;br /&gt;
: [[Hoffman2:Scripts]]&lt;br /&gt;
&lt;br /&gt;
=== Data Transfer ===&lt;br /&gt;
All dressed up with nowhere to go? That&#039;s how Hoffman2 feels if you don&#039;t give it any data to work with. Find out how to avoid hurting the Cluster&#039;s feelings.&lt;br /&gt;
: [[Hoffman2:Data Transfer]]&lt;br /&gt;
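&lt;br /&gt;
The page above covers the options in depth; for quick one-off copies, standard &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; against the login nodes also works (replace login_id and the file names with your own):&lt;br /&gt;
 $ scp mydata.tar.gz login_id@hoffman2.idre.ucla.edu:~/&lt;br /&gt;
 $ rsync -avP my_project/ login_id@hoffman2.idre.ucla.edu:~/my_project/&lt;br /&gt;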
&lt;br /&gt;
=== Sharing Filesystems ===&lt;br /&gt;
All you want to do is be able to look at your precious data. But it is locked up on Hoffman2 and you want to use tools on your computer to look at it. There&#039;s an app for that.&lt;br /&gt;
: [[Hoffman2:Sharing Filesystems]]&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
Simple tools that will help your productivity.&lt;br /&gt;
: [[Hoffman2:Tools]]&lt;br /&gt;
&lt;br /&gt;
== FAQ ==&lt;br /&gt;
Wesley&#039;s Usage, so you can plan around it and ask him to stop beating the cluster up.&lt;br /&gt;
: [[Hoffman2:WTK Usage]]&lt;br /&gt;
&lt;br /&gt;
Delete/Old Hoffman2 Pages&lt;br /&gt;
: [[Hoffman2:Archive]]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=File:MathematicalTools2014.pdf&amp;diff=2769</id>
		<title>File:MathematicalTools2014.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=File:MathematicalTools2014.pdf&amp;diff=2769"/>
		<updated>2014-10-06T07:17:00Z</updated>

		<summary type="html">&lt;p&gt;Acho: Acho uploaded a new version of &amp;amp;quot;File:MathematicalTools2014.pdf&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=File:MathematicalTools2014.pdf&amp;diff=2768</id>
		<title>File:MathematicalTools2014.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=File:MathematicalTools2014.pdf&amp;diff=2768"/>
		<updated>2014-10-06T07:14:28Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Interactive_Sessions&amp;diff=2747</id>
		<title>Hoffman2:Interactive Sessions</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Interactive_Sessions&amp;diff=2747"/>
		<updated>2014-09-18T20:04:24Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
Interactive sessions on Hoffman2 let you have access to a computing node for up to 24 hours.  This is ideal for:&lt;br /&gt;
* running an intensive program like MATLAB (in fact, that&#039;s how it [[Hoffman2:MATLAB|works]]), [[Hoffman2:WEKA|WEKA]], [[Hoffman2:R|R]], or FSLView&lt;br /&gt;
* debugging a script you will be submitting to the queue later&lt;br /&gt;
* moving/tar&#039;ing/untar&#039;ing lots of files&lt;br /&gt;
* any other computing or graphics intensive operations&lt;br /&gt;
since you aren&#039;t supposed to use the login nodes for such heavy lifting.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic Command==&lt;br /&gt;
To get one, you need to use the &amp;lt;code&amp;gt;qrsh&amp;lt;/code&amp;gt; command with the &amp;lt;code&amp;gt;-l i&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&lt;br /&gt;
For example&lt;br /&gt;
 $ qrsh -l i&lt;br /&gt;
will try to get you an interactive node.  The &amp;lt;code&amp;gt;-l&amp;lt;/code&amp;gt; (dash-ell) flag followed by &amp;quot;i&amp;quot; specifies that you want an interactive resource.&lt;br /&gt;
&lt;br /&gt;
Because you didn&#039;t specify a time limit, this session will only last two hours, after which you will be kicked off of the interactive node and back to a login node.&lt;br /&gt;
&lt;br /&gt;
And because you didn&#039;t specify a memory limit, you will get the default.  As of September 2013, job memory enforcement is strict, so you will be kicked off if you exceed ATS&#039;s default memory limit (1GB as of 2013.09.09).&lt;br /&gt;
&lt;br /&gt;
And if all the interactive nodes are busy (there are only so many of them), you will be told that the scheduler was unable to secure one for you.&lt;br /&gt;
&lt;br /&gt;
If you successfully get a node, your prompt will change from something like&lt;br /&gt;
 [joebruin@login4 ~] $&lt;br /&gt;
to something like&lt;br /&gt;
 [joebruin@n1234 ~] $&lt;br /&gt;
indicating you are on node 1234.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Longer Time==&lt;br /&gt;
If you want to specify a time limit for your interactive session (anything less than 24 hours), use the resource flag again and specify the time in HH:MM:SS format.&lt;br /&gt;
&lt;br /&gt;
For example&lt;br /&gt;
 $ qrsh -l i,h_rt=4:00:00&lt;br /&gt;
will try securing an interactive node for four hours with the default amount of RAM, but if they are all taken you will be kindly told you are out of luck.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==More Memory==&lt;br /&gt;
Doing something memory-intensive, like working with a lot of visualizations or multiple datasets? Use the resource flag again and specify a memory request with &amp;lt;code&amp;gt;h_data&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For example&lt;br /&gt;
 $ qrsh -l i,h_rt=4:00:00,h_data=4G&lt;br /&gt;
will try securing an interactive node for four hours with four gigabytes of RAM, but if no such node is available the cluster will deny your request.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Now!==&lt;br /&gt;
If you absolutely need an interactive session now and can&#039;t take no for an answer, use a special flag &amp;lt;code&amp;gt;-now no&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For example&lt;br /&gt;
 $ qrsh -l i,h_rt=4:00:00 -now no&lt;br /&gt;
will try securing an interactive node for four hours with the default amount of RAM.  But if all of the interactive nodes are used up, it will put you in a queue waiting for one until you get it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Tips==&lt;br /&gt;
Sometimes inactivity on your computer will cause your Hoffman2 connection to break with a &amp;quot;Broken Pipe&amp;quot; error (even while computing).&lt;br /&gt;
&lt;br /&gt;
To prevent this from happening:&lt;br /&gt;
On Macs, add this line to the bottom of your /etc/ssh_config:&lt;br /&gt;
 ServerAliveInterval 180&lt;br /&gt;
&lt;br /&gt;
This tells ssh to send a keepalive message to the server every 180 seconds to prevent the connection from timing out.&lt;br /&gt;
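&lt;br /&gt;
If you prefer not to touch the system-wide file, the same setting works in your per-user ~/.ssh/config, optionally scoped to the cluster:&lt;br /&gt;
 Host hoffman2.idre.ucla.edu&lt;br /&gt;
     ServerAliveInterval 180&lt;br /&gt;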
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/sge_qrsh.htm Getting an interactive node]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2622</id>
		<title>Hoffman2:Using Globus</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2622"/>
		<updated>2014-05-25T07:34:55Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Transfer - Cluster to CASS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2:Data_Transfer|Back to Hoffman2:Data_Transfer]]&lt;br /&gt;
&lt;br /&gt;
This page explains how to use Globus to move files around.&lt;br /&gt;
&lt;br /&gt;
==Globus Connect Software (Local Desktop)==&lt;br /&gt;
If you want to transfer files to or from your local desktop machine, you need to download and install the Globus Connect software (one-time). You will need to do this step on each of your desktop machines whose files you want to transfer using Globus.&lt;br /&gt;
&lt;br /&gt;
#Point your browser at http://www.globus.org and click Globus Connect. You will see a popup window in a web page. If you don’t see the popup window, click the Get Globus Connect link on that page.&lt;br /&gt;
#In Step One, click the button corresponding to your local platform (Mac OS, Linux or Windows) to download to your local desktop machine.&lt;br /&gt;
#In Step Two, enter an Endpoint Name to identify your local machine in the Endpoint Name field, (you can ignore the Description field), and click Generate Setup Key.&lt;br /&gt;
#Copy the setup key to some file and save it, since you will not see it again.&lt;br /&gt;
#Install the downloaded Globus Connect software on your local desktop.&lt;br /&gt;
#Run Globus Connect Installation (Windows: globus_connect_install.exe) for the initial setup, and paste in the setup key when prompted.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Local_to_Cluster|directions below.]]&lt;br /&gt;
&lt;br /&gt;
==Adding a CASS Endpoint==&lt;br /&gt;
If you have a UCLA cloud archival storage system, you need to go through this process before moving files between your CASS and other storage systems.&lt;br /&gt;
&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, Click on the Quick Links -&amp;gt; Transfer Files&lt;br /&gt;
#On the Transfer Files Page, Click on Manage Endpoints (on the top right).&lt;br /&gt;
#Click on Add an Endpoint (a drop down menu should appear)&lt;br /&gt;
#Fill out the Information&lt;br /&gt;
#*Basics&lt;br /&gt;
#**Endpoint Name (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Description = &#039;&#039;&#039;[Any Description Will Do - Ex. MSCOHEN-CASS]&#039;&#039;&#039;&lt;br /&gt;
#**Visible To: &#039;&#039;&#039;Private - Visible only to you&#039;&#039;&#039;&lt;br /&gt;
#**Default Directory: &#039;&#039;&#039;/~/&#039;&#039;&#039;&lt;br /&gt;
#*Identity Providers&lt;br /&gt;
#**(In the dropdown menu): &#039;&#039;&#039;MyProxy OAuth&#039;&#039;&#039;&lt;br /&gt;
#*Servers&lt;br /&gt;
#**Server Type: &#039;&#039;&#039;GridFTP&#039;&#039;&#039;&lt;br /&gt;
#**Server Domain (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Server Port: &#039;&#039;&#039;2811&#039;&#039;&#039;&lt;br /&gt;
#**Subject DN: &#039;&#039;&#039;[Your Subject DN - Ex. /C=US/O=Globus Consortium/OU=Globus Connect Service/CN=1234567...]&#039;&#039;&#039;&lt;br /&gt;
# Click Create Endpoint.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Cluster_to_CASS|directions below.]]&lt;br /&gt;
&lt;br /&gt;
==Transfer - Local to Cluster==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your local computer&lt;br /&gt;
&lt;br /&gt;
#If you want to transfer files to or from your local desktop machine, start Globus Connect (Windows: gc.exe, Mac: Applications/Globus) on your local machine. A small status window will appear. When the connection has been made, the dot at the left of the connection will turn green. (MAKE SURE THE KEY IS ADDED - [[Hoffman2:Using_Globus#Globus_Connect_Software|Instructions Above.]])&lt;br /&gt;
#Point your browser at http://www.globus.org, click Sign In, and click Transfer Files. A web page with two Endpoint fields will display.&lt;br /&gt;
#In one Endpoint field, pull down the expand menu and select a site. If you are running Globus Connect, the first name in the list is your local desktop machine. When you select your local desktop machine, a list of your home directory files and directories will be displayed.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#When you select hoffman2#ucla a popup window will ask you for your myproxy server username and passphrase. Enter your Hoffman2 username and its password. Leave the Server DN field as it is. The default Lifetime value is 12 hours. If you are transferring a large amount of data, you may need to increase the Lifetime value. Click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 Cluster home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer.&lt;br /&gt;
You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;br /&gt;
&lt;br /&gt;
==Transfer - Cluster to CASS==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your Cloud Archival Storage Service&lt;br /&gt;
#Make sure you have set up the CASS Endpoint. If you have not, go to the [[Hoffman2:Using_Globus#Adding_a_CASS_Endpoint|section above.]]&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, Click on the Quick Links -&amp;gt; Transfer Files&lt;br /&gt;
#In one Endpoint field, type in &#039;&#039;&#039;cass.idre.ucla.edu&#039;&#039;&#039; and you should see your &#039;&#039;&#039;[username#groupname].cass.idre.ucla.edu&#039;&#039;&#039; drop down - click on it.&lt;br /&gt;
#Authenticate with your CASS credentials and click the Authenticate button.&lt;br /&gt;
#Your CASS home directory and its contents will display.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#Authenticate with your Hoffman2 credentials and click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer. You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2621</id>
		<title>Hoffman2:Using Globus</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2621"/>
		<updated>2014-05-25T07:34:35Z</updated>

		<summary type="html">&lt;p&gt;Acho: /* Transfer - Cluster to CASS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2:Data_Transfer|Back to Hoffman2:Data_Transfer]]&lt;br /&gt;
&lt;br /&gt;
This page explains how to use Globus to move files around.&lt;br /&gt;
&lt;br /&gt;
==Globus Connect Software (Local Desktop)==&lt;br /&gt;
If you want to transfer files to or from your local desktop machine, you need to download and install the Globus Connect software (one-time). You will need to do this step on each of your desktop machines whose files you want to transfer using Globus.&lt;br /&gt;
&lt;br /&gt;
#Point your browser at http://www.globus.org and click Globus Connect. You will see a popup window in a web page. If you don’t see the popup window, click the Get Globus Connect link on that page.&lt;br /&gt;
#In Step One, click the button corresponding to your local platform (Mac OS, Linux or Windows) to download to your local desktop machine.&lt;br /&gt;
#In Step Two, enter an Endpoint Name to identify your local machine in the Endpoint Name field, (you can ignore the Description field), and click Generate Setup Key.&lt;br /&gt;
#Copy the setup key to some file and save it, since you will not see it again.&lt;br /&gt;
#Install the downloaded Globus Connect software on your local desktop.&lt;br /&gt;
#Run Globus Connect Installation (Windows: globus_connect_install.exe) for the initial setup, and paste in the setup key when prompted.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Local_to_Cluster|directions below.]]&lt;br /&gt;
&lt;br /&gt;
==Adding a CASS Endpoint==&lt;br /&gt;
If you have a UCLA cloud archival storage system, you need to go through this process before moving files between your CASS and other storage systems.&lt;br /&gt;
&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, Click on the Quick Links -&amp;gt; Transfer Files&lt;br /&gt;
#On the Transfer Files Page, Click on Manage Endpoints (on the top right).&lt;br /&gt;
#Click on Add an Endpoint (a drop down menu should appear)&lt;br /&gt;
#Fill out the Information&lt;br /&gt;
#*Basics&lt;br /&gt;
#**Endpoint Name (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Description = &#039;&#039;&#039;[Any Description Will Do - Ex. MSCOHEN-CASS]&#039;&#039;&#039;&lt;br /&gt;
#**Visible To: &#039;&#039;&#039;Private - Visible only to you&#039;&#039;&#039;&lt;br /&gt;
#**Default Directory: &#039;&#039;&#039;/~/&#039;&#039;&#039;&lt;br /&gt;
#*Identity Providers&lt;br /&gt;
#**(In the dropdown menu): &#039;&#039;&#039;MyProxy OAuth&#039;&#039;&#039;&lt;br /&gt;
#*Servers&lt;br /&gt;
#**Server Type: &#039;&#039;&#039;GridFTP&#039;&#039;&#039;&lt;br /&gt;
#**Server Domain (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Server Port: &#039;&#039;&#039;2811&#039;&#039;&#039;&lt;br /&gt;
#**Subject DN: &#039;&#039;&#039;[Your Subject DN - Ex. /C=US/O=Globus Consortium/OU=Globus Connect Service/CN=1234567...]&#039;&#039;&#039;&lt;br /&gt;
# Click Create Endpoint.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Cluster_to_CASS|directions below.]]&lt;br /&gt;
&lt;br /&gt;
==Transfer - Local to Cluster==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your local computer&lt;br /&gt;
&lt;br /&gt;
#If you want to transfer files to or from your local desktop machine, start Globus Connect (Windows: gc.exe, Mac: Applications/Globus) on your local machine. A small status window will appear. When the connection has been made, the dot at the left of the connection will turn green. (MAKE SURE THE KEY IS ADDED - [[Hoffman2:Using_Globus#Globus_Connect_Software|Instructions Above.]])&lt;br /&gt;
#Point your browser at http://www.globus.org, click Sign In, and click Transfer Files. A web page with two Endpoint fields will display.&lt;br /&gt;
#In one Endpoint field, pull down the expand menu and select a site. If you are running Globus Connect, the first name in the list is your local desktop machine. When you select your local desktop machine, a list of your home directory files and directories will be displayed.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#When you select hoffman2#ucla a popup window will ask you for your myproxy server username and passphrase. Enter your Hoffman2 username and its password. Leave the Server DN field as it is. The default Lifetime value is 12 hours. If you are transferring a large amount of data, you may need to increase the Lifetime value. Click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 Cluster home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer.&lt;br /&gt;
You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;br /&gt;
&lt;br /&gt;
==Transfer - Cluster to CASS==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your Cloud Archival Storage Service&lt;br /&gt;
#Make sure you have set up the CASS Endpoint. If you have not, go to the [[Hoffman2:Using_Globus#Adding_a_CASS_Endpoint|section above.]]&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, Click on the Quick Links -&amp;gt; Transfer Files&lt;br /&gt;
#In one Endpoint field, type in &#039;&#039;&#039;cass.idre.ucla.edu&#039;&#039;&#039; and you should see your &#039;&#039;&#039;[username#groupname].cass.idre.ucla.edu&#039;&#039;&#039; drop down - click on it.&lt;br /&gt;
#Authenticate with your CASS credentials and click the Authenticate button.&lt;br /&gt;
#Your CASS home directory and its contents will display.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#Authenticate with your Hoffman2 credentials and click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer. You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2620</id>
		<title>Hoffman2:Using Globus</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2620"/>
		<updated>2014-05-25T07:34:11Z</updated>

		<summary type="html">&lt;p&gt;Acho: Undo revision 2619 by Acho (talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2:Data_Transfer|Back to Hoffman2:Data_Transfer]]&lt;br /&gt;
&lt;br /&gt;
This page explains how to use Globus to move files around.&lt;br /&gt;
&lt;br /&gt;
==Globus Connect Software (Local Desktop)==&lt;br /&gt;
If you want to transfer files to or from your local desktop machine, you need to download and install the Globus Connect software (one-time). You will need to do this step on each of your desktop machines whose files you want to transfer using Globus.&lt;br /&gt;
&lt;br /&gt;
#Point your browser at http://www.globus.org and click Globus Connect. You will see a popup window in a web page. If you don’t see the popup window, click the Get Globus Connect link on that page.&lt;br /&gt;
#In Step One, click the button corresponding to your local platform (Mac OS, Linux or Windows) to download to your local desktop machine.&lt;br /&gt;
#In Step Two, enter an Endpoint Name to identify your local machine in the Endpoint Name field, (you can ignore the Description field), and click Generate Setup Key.&lt;br /&gt;
#Copy the setup key to some file and save it, since you will not see it again.&lt;br /&gt;
#Install the downloaded Globus Connect software on your local desktop.&lt;br /&gt;
#Run Globus Connect Installation (Windows: globus_connect_install.exe) for the initial setup, and paste in the setup key when prompted.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Local_to_Cluster|directions below.]]&lt;br /&gt;
&lt;br /&gt;
==Adding a CASS Endpoint==&lt;br /&gt;
If you have a UCLA cloud archival storage system, you need to go through this process before moving files between your CASS and other storage systems.&lt;br /&gt;
&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, Click on the Quick Links -&amp;gt; Transfer Files&lt;br /&gt;
#On the Transfer Files Page, Click on Manage Endpoints (on the top right).&lt;br /&gt;
#Click on Add an Endpoint (a drop down menu should appear)&lt;br /&gt;
#Fill out the Information&lt;br /&gt;
#*Basics&lt;br /&gt;
#**Endpoint Name (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Description = &#039;&#039;&#039;[Any Description Will Do - Ex. MSCOHEN-CASS]&#039;&#039;&#039;&lt;br /&gt;
#**Visible To: &#039;&#039;&#039;Private - Visible only to you&#039;&#039;&#039;&lt;br /&gt;
#**Default Directory: &#039;&#039;&#039;/~/&#039;&#039;&#039;&lt;br /&gt;
#*Identity Providers&lt;br /&gt;
#**(In the dropdown menu): &#039;&#039;&#039;MyProxy OAuth&#039;&#039;&#039;&lt;br /&gt;
#*Servers&lt;br /&gt;
#**Server Type: &#039;&#039;&#039;GridFTP&#039;&#039;&#039;&lt;br /&gt;
#**Server Domain (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Server Port: &#039;&#039;&#039;2811&#039;&#039;&#039;&lt;br /&gt;
#**Subject DN: &#039;&#039;&#039;[Your Subject DN - Ex. /C=US/O=Globus Consortium/OU=Globus Connect Service/CN=1234567...]&#039;&#039;&#039;&lt;br /&gt;
# Click Create Endpoint.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Cluster_to_CASS|directions below.]]&lt;br /&gt;
&lt;br /&gt;
==Transfer - Local to Cluster==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your local computer&lt;br /&gt;
&lt;br /&gt;
#If you want to transfer files to or from your local desktop machine, start Globus Connect (Windows: gc.exe, Mac: Applications/Globus) on your local machine. A small status window will appear. When the connection has been made, the dot at the left of the connection will turn green. (MAKE SURE THE KEY IS ADDED - [[Hoffman2:Using_Globus#Globus_Connect_Software|Instructions Above.]])&lt;br /&gt;
#Point your browser at http://www.globus.org, click Sign In, and click Transfer Files. A web page with two Endpoint fields will display.&lt;br /&gt;
#In one Endpoint field, pull down the expand menu and select a site. If you are running Globus Connect, the first name in the list is your local desktop machine. When you select your local desktop machine, a list of your home directory files and directories will be displayed.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#When you select hoffman2#ucla a popup window will ask you for your myproxy server username and passphrase. Enter your Hoffman2 username and its password. Leave the Server DN field as it is. The default Lifetime value is 12 hours. If you are transferring a large amount of data, you may need to increase the Lifetime value. Click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 Cluster home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer.&lt;br /&gt;
You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;br /&gt;
&lt;br /&gt;
==Transfer - Cluster to CASS==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your Cloud Archival Storage Service&lt;br /&gt;
#Make sure you have set up the CASS Endpoint. If you have not, go to the [[Hoffman2:Using_Globus#Adding_a_CASS_Endpoint|section above.]]&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, Click on the Quick Links -&amp;gt; Transfer Files&lt;br /&gt;
#In one Endpoint field, type in &#039;&#039;&#039;cass.idre.ucla.edu&#039;&#039;&#039; and you should see your &#039;&#039;&#039;[username#groupname].cass.idre.ucla.edu&#039;&#039;&#039; drop down - click on it.&lt;br /&gt;
#Authenticate with your CASS credentials and click the Authenticate button.&lt;br /&gt;
#Your CASS home directory and its contents will display.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#Authenticate with your Hoffman2 credentials and click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer. You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2619</id>
		<title>Hoffman2:Using Globus</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2619"/>
		<updated>2014-05-25T07:32:27Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2:Data_Transfer|Back to Hoffman2:Data_Transfer]]&lt;br /&gt;
&lt;br /&gt;
This page explains how to use Globus to move files around.&lt;br /&gt;
&lt;br /&gt;
==Globus Connect Software (Local Desktop)==&lt;br /&gt;
If you want to transfer files to or from your local desktop machine, you need to download and install the Globus Connect software (one-time). You will need to do this step on each of your desktop machines whose files you want to transfer using Globus.&lt;br /&gt;
&lt;br /&gt;
#Point your browser at http://www.globus.org and click Globus Connect. You will see a popup window in a web page. If you don’t see the popup window, click the Get Globus Connect link on that page.&lt;br /&gt;
#In Step One, click the button corresponding to your local platform (Mac OS, Linux or Windows) to download to your local desktop machine.&lt;br /&gt;
#In Step Two, enter an Endpoint Name to identify your local machine in the Endpoint Name field, (you can ignore the Description field), and click Generate Setup Key.&lt;br /&gt;
#Copy the setup key to some file and save it, since you will not see it again.&lt;br /&gt;
#Install the downloaded Globus Connect software on your local desktop.&lt;br /&gt;
#Run Globus Connect Installation (Windows: globus_connect_install.exe) for the initial setup, and paste in the setup key when prompted.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Local_to_Cluster|directions below.]]&lt;br /&gt;
&lt;br /&gt;
==Adding a CASS Endpoint==&lt;br /&gt;
If you have a UCLA cloud archival storage system, you need to go through this process before moving files between your CASS and other storage systems.&lt;br /&gt;
&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, Click on the Quick Links -&amp;gt; Transfer Files&lt;br /&gt;
#On the Transfer Files Page, Click on Manage Endpoints (on the top right).&lt;br /&gt;
#Click on Add an Endpoint (a drop down menu should appear)&lt;br /&gt;
#Fill out the Information&lt;br /&gt;
#*Basics&lt;br /&gt;
#**Endpoint Name (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Description = &#039;&#039;&#039;[Any Description Will Do - Ex. MSCOHEN-CASS]&#039;&#039;&#039;&lt;br /&gt;
#**Visible To: &#039;&#039;&#039;Private - Visible only to you&#039;&#039;&#039;&lt;br /&gt;
#**Default Directory: &#039;&#039;&#039;/~/&#039;&#039;&#039;&lt;br /&gt;
#*Identity Providers&lt;br /&gt;
#**(In the dropdown menu): &#039;&#039;&#039;MyProxy OAuth&#039;&#039;&#039;&lt;br /&gt;
#*Servers&lt;br /&gt;
#**Server Type: &#039;&#039;&#039;GridFTP&#039;&#039;&#039;&lt;br /&gt;
#**Server Domain (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Server Port: &#039;&#039;&#039;2811&#039;&#039;&#039;&lt;br /&gt;
#**Subject DN: &#039;&#039;&#039;[Your Subject DN - Ex. /C=US/O=Globus Consortium/OU=Globus Connect Service/CN=1234567...]&#039;&#039;&#039;&lt;br /&gt;
#Click Create Endpoint.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Cluster_to_CASS|directions below]].&lt;br /&gt;
&lt;br /&gt;
==Transfer - Local to Cluster==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your local computer&lt;br /&gt;
&lt;br /&gt;
#If you want to transfer files to or from your local desktop machine, start Globus Connect (Windows: gc.exe, Mac: Applications/Globus) on your local machine. A small status window will appear. When the connection has been made, the dot at the left of the connection will turn green. (MAKE SURE THE KEY IS ADDED - [[Hoffman2:Data_Transfer#Globus_Connect_software|instructions here]].)&lt;br /&gt;
#Point your browser at http://www.globus.org, click Sign In, and click Transfer Files. A web page with two Endpoint fields will display.&lt;br /&gt;
#In one Endpoint field, pull down the expand menu and select a site. If you are running Globus Connect, the first name in the list is your local desktop machine. When you select your local desktop machine, a list of your home directory files and directories will be displayed.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#When you select hoffman2#ucla a popup window will ask you for your myproxy server username and passphrase. Enter your Hoffman2 username and its password. Leave the Server DN field as it is. The default Lifetime value is 12 hours. If you are transferring a large amount of data, you may need to increase the Lifetime value. Click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 Cluster home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer.&lt;br /&gt;
You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;br /&gt;
&lt;br /&gt;
==Transfer - Cluster to CASS==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your Cloud Archival Storage Service&lt;br /&gt;
#Make sure you have set up the CASS endpoint. If you have not, follow the [[Hoffman2:Using_Globus#Adding_a_CASS_Endpoint|directions above]].&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, click Quick Links -&amp;gt; Transfer Files.&lt;br /&gt;
#In one Endpoint field, type in &#039;&#039;&#039;cass.idre.ucla.edu&#039;&#039;&#039; and your &#039;&#039;&#039;[username#groupname].cass.idre.ucla.edu&#039;&#039;&#039; endpoint should appear in the drop-down - click on it.&lt;br /&gt;
#Authenticate with your CASS credentials and click the Authenticate button.&lt;br /&gt;
#Your CASS home directory and its contents will display.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#Authenticate with your Hoffman2 credentials and click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer. You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Data_Transfer&amp;diff=2618</id>
		<title>Hoffman2:Data Transfer</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Data_Transfer&amp;diff=2618"/>
		<updated>2014-05-25T07:20:28Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2|Back to all things Hoffman2]]&lt;br /&gt;
&lt;br /&gt;
So you&#039;ve got a whole bunch of data sitting on your hard drive or somewhere else, and you need it on Hoffman2. How do you get those gigs and gigs of data onto the cluster? Sorry, there&#039;s no magical genie that instantly does it for you. You&#039;re going to have to transfer it over your network to Hoffman2. PLEASE NOTE: TRANSFER IS ONLY AS FAST AS YOUR INTERNET CONNECTION!&lt;br /&gt;
&lt;br /&gt;
Here&#039;s several ways of pushing that cow up the hill.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Globus Online==&lt;br /&gt;
Globus Online is a tool that abstracts a lot of complexity from the data transfer process. It is a GUI system that can start a transfer, survive the internet connection being broken and re-established, and then resume and finish the transfer automatically.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;It is the fastest way to transfer data to Hoffman2&#039;&#039;&#039; because its endpoint has a faster network connection than the other Hoffman2 nodes.&lt;br /&gt;
&lt;br /&gt;
====Create a Globus Account====&lt;br /&gt;
First you need to create a free Globus account (one-time):&lt;br /&gt;
*Point your browser at http://www.globus.org and click Sign Up.&lt;br /&gt;
*On the Create an Account page, fill in the information (your name, email address, username, password, etc.) and read the terms, then click Register.&lt;br /&gt;
You will receive an email with a link which you need to follow to confirm your new Globus account.&lt;br /&gt;
&lt;br /&gt;
====General Usage====&lt;br /&gt;
After creating an account, you need to set up the endpoints you will transfer to and from.&lt;br /&gt;
*Hoffman2 Endpoint&lt;br /&gt;
The Hoffman2 endpoint is already set up on Globus; just type &#039;&#039;&#039;hoffman2#ucla&#039;&#039;&#039; into the endpoint field.&lt;br /&gt;
*CASS Endpoint&lt;br /&gt;
The [http://www.cass.idre.ucla.edu/ Cloud Archival Storage Service (CASS)] endpoint needs to be set up on Globus before you can use it.&lt;br /&gt;
[[Hoffman2:Using_Globus#Adding_a_CASS_Endpoint | Directions Here.]]&lt;br /&gt;
*Local Desktop Endpoint&lt;br /&gt;
If you want to transfer files to or from your local desktop, you need to download the Globus Connect desktop client and set it up on Globus before you can transfer any files.&lt;br /&gt;
[[Hoffman2:Using_Globus#Globus_Connect_Software | Directions Here.]]&lt;br /&gt;
&lt;br /&gt;
====Transfer Files====&lt;br /&gt;
*[[Hoffman2:Using_Globus#Transfer_-_Cluster_to_CASS| Transfer -&amp;gt; Cluster to CASS]]&lt;br /&gt;
&lt;br /&gt;
*[[Hoffman2:Using_Globus#Transfer_-_Local_to_Cluster| Transfer -&amp;gt; Local to Cluster]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For more information, see the IDRE instructions:&lt;br /&gt;
[http://hpc.ucla.edu/hoffman2/file-transfer/gol.php  Globus Online]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==scp==&lt;br /&gt;
A command line tool for copying files using secure encrypted channels.&lt;br /&gt;
&lt;br /&gt;
====General Usage====&lt;br /&gt;
 scp [username@FROM_HOST:]source [username@TO_HOST:]destination&lt;br /&gt;
where either side may be a plain local path (no host prefix).&lt;br /&gt;
&lt;br /&gt;
====Hoffman2 Examples====&lt;br /&gt;
Copy a file from local computer to Hoffman2&lt;br /&gt;
 scp /path/to/local/file username@dtn2.hoffman2.idre.ucla.edu:/path/to/destination/for/copy&lt;br /&gt;
&lt;br /&gt;
Copy a file from Hoffman2 to local computer&lt;br /&gt;
 scp username@dtn2.hoffman2.idre.ucla.edu:/path/to/file /path/to/destination/for/copy&lt;br /&gt;
&lt;br /&gt;
Copy a directory from local computer to Hoffman2&lt;br /&gt;
 scp -r /path/to/local/directory username@dtn2.hoffman2.idre.ucla.edu:/path/to/destination/for/copy&lt;br /&gt;
&lt;br /&gt;
Copy a directory from Hoffman2 to local computer&lt;br /&gt;
 scp -r username@dtn2.hoffman2.idre.ucla.edu:/path/to/directory /path/to/destination/for/copy&lt;br /&gt;
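For large transfers it is worth verifying the copy afterwards with checksums. Here is a minimal, self-contained sketch of the idea; the real scp invocation is commented out because it needs cluster credentials, so a local cp stands in for the transfer, and the paths are made up for illustration:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: verify a copy by comparing checksums on both ends.
set -e
workdir=$(mktemp -d)
printf 'sample data\n' > "$workdir/src.dat"
# A real transfer would look like (hypothetical paths):
#   scp "$workdir/src.dat" username@dtn2.hoffman2.idre.ucla.edu:~/src.dat
cp "$workdir/src.dat" "$workdir/copy.dat"   # stand-in for the transfer
# Run cksum on each end and compare checksum + byte count.
src_sum=$(cksum < "$workdir/src.dat")
dst_sum=$(cksum < "$workdir/copy.dat")
if [ "$src_sum" = "$dst_sum" ]; then status="copy verified"; else status="copy corrupted"; fi
echo "$status"
rm -rf "$workdir"
```

Against the cluster, you would run the same cksum command in an ssh session on Hoffman2 and compare its output with the local one by eye.&lt;br /&gt;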
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==sftp==&lt;br /&gt;
Another command line tool that uses secure encrypted channels. There are also GUIs that use this protocol (like [http://cyberduck.ch/ Cyberduck] or [http://filezilla-project.org/ FileZilla]).&lt;br /&gt;
&lt;br /&gt;
====General Usage====&lt;br /&gt;
Connect to the server, then enter your password when prompted.&lt;br /&gt;
 sftp USERNAME@SERVERADDRESS&lt;br /&gt;
Pull down a file from the server&lt;br /&gt;
 get /path/to/server/file /path/to/destination/for/copy&lt;br /&gt;
Push a file to the server&lt;br /&gt;
 put /path/to/local/file /path/to/destination/for/copy/on/server&lt;br /&gt;
Log out.&lt;br /&gt;
 bye&lt;br /&gt;
&lt;br /&gt;
====Hoffman2 Example====&lt;br /&gt;
 sftp USERNAME@dtn2.hoffman2.idre.ucla.edu&lt;br /&gt;
 get /server/file /local/file&lt;br /&gt;
 put /local/file /server/file&lt;br /&gt;
 bye&lt;br /&gt;
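sftp can also run non-interactively with its -b (batch file) option, which is handy once a transfer becomes routine. A sketch of setting that up; the sftp invocation itself is commented out since it needs a live connection, and the get/put paths are the placeholder examples from above:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: drive sftp from a batch file instead of typing commands.
set -e
batch=$(mktemp)
cat > "$batch" <<'EOF'
get /server/file /local/file
put /local/file /server/file
bye
EOF
# With a reachable server you would then run:
#   sftp -b "$batch" USERNAME@dtn2.hoffman2.idre.ucla.edu
# In batch mode sftp aborts on the first failed get/put, so errors
# are not silently skipped.
lines=$(wc -l < "$batch")
echo "batch file has $lines commands"
rm -f "$batch"
```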
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==rsync==&lt;br /&gt;
As its name implies, rsync syncs folders/files between filesystems. It also makes use of secure encrypted channels.&lt;br /&gt;
&lt;br /&gt;
====General Usage====&lt;br /&gt;
 rsync [OPTION] … SRC [SRC] … [USER@]HOST:DEST&lt;br /&gt;
&lt;br /&gt;
====Hoffman2 Example====&lt;br /&gt;
We recommend using something like&lt;br /&gt;
 rsync -av /PATH/TO/SRC/FILES/HERE USERNAME@dtn2.hoffman2.idre.ucla.edu:~&lt;br /&gt;
to upload &#039;&#039;/PATH/TO/SRC/FILES/HERE&#039;&#039; from your local machine to your home directory on Hoffman2.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External Links==&lt;br /&gt;
*[http://linux.die.net/man/1/scp Man page for scp]&lt;br /&gt;
*[http://linux.die.net/man/1/sftp Man page for sftp]&lt;br /&gt;
*[http://linux.die.net/man/1/rsync Man page for rsync]&lt;br /&gt;
*[http://www.ats.ucla.edu/clusters/hoffman2/gol.htm  Globus Online]&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2617</id>
		<title>Hoffman2:Using Globus</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2617"/>
		<updated>2014-05-25T07:17:07Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2:Data_Transfer|Back to Hoffman2:Data_Transfer]]&lt;br /&gt;
&lt;br /&gt;
This page explains how to use Globus to move files around.&lt;br /&gt;
&lt;br /&gt;
==Globus Connect Software (Local Desktop)==&lt;br /&gt;
If you want to transfer files to or from your local desktop machine, you need to download and install the Globus Connect software (one-time). You will need to do this step on each of your desktop machines whose files you want to transfer using Globus.&lt;br /&gt;
&lt;br /&gt;
#Point your browser at http://www.globus.org and click Globus Connect. You will see a popup window in a web page. If you don’t see the popup window, click the Get Globus Connect link on that page.&lt;br /&gt;
#In Step One, click the button corresponding to your local platform (Mac OS, Linux or Windows) to download to your local desktop machine.&lt;br /&gt;
#In Step Two, enter an Endpoint Name to identify your local machine in the Endpoint Name field, (you can ignore the Description field), and click Generate Setup Key.&lt;br /&gt;
#Copy the setup key to some file and save it, since you will not see it again.&lt;br /&gt;
#Install the downloaded Globus Connect software on your local desktop.&lt;br /&gt;
#Run Globus Connect Installation (Windows: globus_connect_install.exe) for the initial setup, and paste in the setup key when prompted.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Local_to_Cluster|directions below.]]&lt;br /&gt;
&lt;br /&gt;
==Adding a CASS Endpoint==&lt;br /&gt;
If you have a UCLA cloud archival storage system, you need to go through this process before moving files between your CASS and other storage systems.&lt;br /&gt;
&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, click Quick Links -&amp;gt; Transfer Files.&lt;br /&gt;
#On the Transfer Files page, click Manage Endpoints (on the top right).&lt;br /&gt;
#Click Add an Endpoint (a drop-down menu should appear).&lt;br /&gt;
#Fill out the information:&lt;br /&gt;
#*Basics&lt;br /&gt;
#**Endpoint Name (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Description = &#039;&#039;&#039;[Any Description Will Do - Ex. MSCOHEN-CASS]&#039;&#039;&#039;&lt;br /&gt;
#**Visible To: &#039;&#039;&#039;Private - Visible only to you&#039;&#039;&#039;&lt;br /&gt;
#**Default Directory: &#039;&#039;&#039;/~/&#039;&#039;&#039;&lt;br /&gt;
#*Identity Providers&lt;br /&gt;
#**(In the dropdown menu): &#039;&#039;&#039;MyProxy OAuth&#039;&#039;&#039;&lt;br /&gt;
#*Servers&lt;br /&gt;
#**Server Type: &#039;&#039;&#039;GridFTP&#039;&#039;&#039;&lt;br /&gt;
#**Server Domain (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Server Port: &#039;&#039;&#039;2811&#039;&#039;&#039;&lt;br /&gt;
#**Subject DN: &#039;&#039;&#039;[Your Subject DN - Ex. /C=US/O=Globus Consortium/OU=Globus Connect Service/CN=1234567...]&#039;&#039;&#039;&lt;br /&gt;
#Click Create Endpoint.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Cluster_to_CASS|directions below.]]&lt;br /&gt;
&lt;br /&gt;
==Transfer - Local to Cluster==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your local computer&lt;br /&gt;
&lt;br /&gt;
#If you want to transfer files to or from your local desktop machine, start Globus Connect (Windows: gc.exe, Mac: Applications/Globus) on your local machine. A small status window will appear. When the connection has been made, the dot at the left of the connection will turn green. (MAKE SURE THE KEY IS ADDED - [[Hoffman2:Using_Globus#Globus_Connect_Software|instructions above]].)&lt;br /&gt;
#Point your browser at http://www.globus.org, click Sign In, and click Transfer Files. A web page with two Endpoint fields will display.&lt;br /&gt;
#In one Endpoint field, pull down the expand menu and select a site. If you are running Globus Connect, the first name in the list is your local desktop machine. When you select your local desktop machine, a list of your home directory files and directories will be displayed.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#When you select hoffman2#ucla a popup window will ask you for your myproxy server username and passphrase. Enter your Hoffman2 username and its password. Leave the Server DN field as it is. The default Lifetime value is 12 hours. If you are transferring a large amount of data, you may need to increase the Lifetime value. Click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 Cluster home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer.&lt;br /&gt;
You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;br /&gt;
&lt;br /&gt;
==Transfer - Cluster to CASS==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your Cloud Archival Storage Service&lt;br /&gt;
#Make sure you have set up the CASS endpoint. If you have not, follow the [[Hoffman2:Using_Globus#Adding_a_CASS_Endpoint|directions above]].&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, click Quick Links -&amp;gt; Transfer Files.&lt;br /&gt;
#In one Endpoint field, type in &#039;&#039;&#039;cass.idre.ucla.edu&#039;&#039;&#039; and your &#039;&#039;&#039;[username#groupname].cass.idre.ucla.edu&#039;&#039;&#039; endpoint should appear in the drop-down - click on it.&lt;br /&gt;
#Authenticate with your CASS credentials and click the Authenticate button.&lt;br /&gt;
#Your CASS home directory and its contents will display.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#Authenticate with your Hoffman2 credentials and click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer. You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
	<entry>
		<id>https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2616</id>
		<title>Hoffman2:Using Globus</title>
		<link rel="alternate" type="text/html" href="https://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Using_Globus&amp;diff=2616"/>
		<updated>2014-05-25T06:49:26Z</updated>

		<summary type="html">&lt;p&gt;Acho: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Hoffman2:Data_Transfer|Back to Hoffman2:Data_Transfer]]&lt;br /&gt;
&lt;br /&gt;
This page explains how to use Globus to move files around.&lt;br /&gt;
&lt;br /&gt;
==Globus Connect Software (Local Desktop)==&lt;br /&gt;
If you want to transfer files to or from your local desktop machine, you need to download and install the Globus Connect software (one-time). You will need to do this step on each of your desktop machines whose files you want to transfer using Globus.&lt;br /&gt;
&lt;br /&gt;
#Point your browser at http://www.globus.org and click Globus Connect. You will see a popup window in a web page. If you don’t see the popup window, click the Get Globus Connect link on that page.&lt;br /&gt;
#In Step One, click the button corresponding to your local platform (Mac OS, Linux or Windows) to download to your local desktop machine.&lt;br /&gt;
#In Step Two, enter an Endpoint Name to identify your local machine in the Endpoint Name field, (you can ignore the Description field), and click Generate Setup Key.&lt;br /&gt;
#Copy the setup key to some file and save it, since you will not see it again.&lt;br /&gt;
#Install the downloaded Globus Connect software on your local desktop.&lt;br /&gt;
#Run Globus Connect Installation (Windows: globus_connect_install.exe) for the initial setup, and paste in the setup key when prompted.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Local_to_Cluster|directions below.]]&lt;br /&gt;
&lt;br /&gt;
==Adding a CASS Endpoint==&lt;br /&gt;
If you have a UCLA cloud archival storage system, you need to go through this process before moving files between your CASS and other storage systems.&lt;br /&gt;
&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, click Quick Links -&amp;gt; Transfer Files.&lt;br /&gt;
#On the Transfer Files page, click Manage Endpoints (on the top right).&lt;br /&gt;
#Click Add an Endpoint (a drop-down menu should appear).&lt;br /&gt;
#Fill out the information:&lt;br /&gt;
#*Basics&lt;br /&gt;
#**Endpoint Name (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Description = &#039;&#039;&#039;[Any Description Will Do - Ex. MSCOHEN-CASS]&#039;&#039;&#039;&lt;br /&gt;
#**Visible To: &#039;&#039;&#039;Private - Visible only to you&#039;&#039;&#039;&lt;br /&gt;
#**Default Directory: &#039;&#039;&#039;/~/&#039;&#039;&#039;&lt;br /&gt;
#*Identity Providers&lt;br /&gt;
#**(In the dropdown menu): &#039;&#039;&#039;MyProxy OAuth&#039;&#039;&#039;&lt;br /&gt;
#*Servers&lt;br /&gt;
#**Server Type: &#039;&#039;&#039;GridFTP&#039;&#039;&#039;&lt;br /&gt;
#**Server Domain (Substitute GROUP with your own Groupname) = &#039;&#039;&#039;[GROUP].cass.idre.ucla.edu&#039;&#039;&#039;&lt;br /&gt;
#**Server Port: &#039;&#039;&#039;2811&#039;&#039;&#039;&lt;br /&gt;
#**Subject DN: &#039;&#039;&#039;[Your Subject DN - Ex. /C=US/O=Globus Consortium/OU=Globus Connect Service/CN=1234567...]&#039;&#039;&#039;&lt;br /&gt;
#Click Create Endpoint.&lt;br /&gt;
#You should now be able to transfer files by following the [[Hoffman2:Using_Globus#Transfer_-_Cluster_to_CASS|directions below]].&lt;br /&gt;
&lt;br /&gt;
==Transfer - Local to Cluster==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your local computer&lt;br /&gt;
&lt;br /&gt;
#If you want to transfer files to or from your local desktop machine, start Globus Connect (Windows: gc.exe, Mac: Applications/Globus) on your local machine. A small status window will appear. When the connection has been made, the dot at the left of the connection will turn green. (MAKE SURE THE KEY IS ADDED - [[Hoffman2:Data_Transfer#Globus_Connect_software|instructions here]].)&lt;br /&gt;
#Point your browser at http://www.globus.org, click Sign In, and click Transfer Files. A web page with two Endpoint fields will display.&lt;br /&gt;
#In one Endpoint field, pull down the expand menu and select a site. If you are running Globus Connect, the first name in the list is your local desktop machine. When you select your local desktop machine, a list of your home directory files and directories will be displayed.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#When you select hoffman2#ucla a popup window will ask you for your myproxy server username and passphrase. Enter your Hoffman2 username and its password. Leave the Server DN field as it is. The default Lifetime value is 12 hours. If you are transferring a large amount of data, you may need to increase the Lifetime value. Click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 Cluster home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer.&lt;br /&gt;
You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;br /&gt;
&lt;br /&gt;
==Transfer - Cluster to CASS==&lt;br /&gt;
How to transfer files between the Hoffman2 Cluster and your Cloud Archival Storage Service&lt;br /&gt;
#Make sure you have set up the CASS endpoint. If you have not, follow the [[Hoffman2:Using_Globus#Adding_a_CASS_Endpoint|directions above]].&lt;br /&gt;
#Point your browser to http://www.globus.org and sign in.&lt;br /&gt;
#On the top right, click Quick Links -&amp;gt; Transfer Files.&lt;br /&gt;
#In one Endpoint field, type in &#039;&#039;&#039;cass.idre.ucla.edu&#039;&#039;&#039; and your &#039;&#039;&#039;[username#groupname].cass.idre.ucla.edu&#039;&#039;&#039; endpoint should appear in the drop-down - click on it.&lt;br /&gt;
#Authenticate with your CASS credentials and click the Authenticate button.&lt;br /&gt;
#Your CASS home directory and its contents will display.&lt;br /&gt;
#In the other Endpoint field, either pull down the expand menu and select hoffman2#ucla, or type in hoffman2#ucla.&lt;br /&gt;
#Authenticate with your Hoffman2 credentials and click the Authenticate button.&lt;br /&gt;
#Your Hoffman2 home directory and its contents will display.&lt;br /&gt;
#To transfer files between endpoints, select a file or directory from each list, then click one of the large arrow buttons to tell Globus the desired direction of the transfer. You will receive an automatic email from Globus Notification (notify@globus.org) when the file transfer has completed. To have Globus show you the status and history of your file transfers, from its Go To pull-down menu, select View Transfers.&lt;/div&gt;</summary>
		<author><name>Acho</name></author>
	</entry>
</feed>