diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000000000000000000000000000000000000..f288702d2fa16d3cdf0035b15a9fcbc552cd88e7
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,674 @@
+                    GNU GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works.  By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.  We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors.  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+  To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights.  Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received.  You must make sure that they, too, receive
+or can get the source code.  And you must show them these terms so they
+know their rights.
+
+  Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+  For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software.  For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+  Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so.  This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software.  The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable.  Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products.  If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+  Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary.  To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                       TERMS AND CONDITIONS
+
+  0. Definitions.
+
+  "This License" refers to version 3 of the GNU General Public License.
+
+  "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+  "The Program" refers to any copyrightable work licensed under this
+License.  Each licensee is addressed as "you".  "Licensees" and
+"recipients" may be individuals or organizations.
+
+  To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy.  The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+  A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+  To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy.  Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+  To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies.  Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+  An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License.  If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+  1. Source Code.
+
+  The "source code" for a work means the preferred form of the work
+for making modifications to it.  "Object code" means any non-source
+form of a work.
+
+  A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+  The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form.  A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+  The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities.  However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work.  For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+  The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+  The Corresponding Source for a work in source code form is that
+same work.
+
+  2. Basic Permissions.
+
+  All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met.  This License explicitly affirms your unlimited
+permission to run the unmodified Program.  The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work.  This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+  You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force.  You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright.  Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+  Conveying under any other circumstances is permitted solely under
+the conditions stated below.  Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+  3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+  No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+  When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+  4. Conveying Verbatim Copies.
+
+  You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+  You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+  5. Conveying Modified Source Versions.
+
+  You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+    a) The work must carry prominent notices stating that you modified
+    it, and giving a relevant date.
+
+    b) The work must carry prominent notices stating that it is
+    released under this License and any conditions added under section
+    7.  This requirement modifies the requirement in section 4 to
+    "keep intact all notices".
+
+    c) You must license the entire work, as a whole, under this
+    License to anyone who comes into possession of a copy.  This
+    License will therefore apply, along with any applicable section 7
+    additional terms, to the whole of the work, and all its parts,
+    regardless of how they are packaged.  This License gives no
+    permission to license the work in any other way, but it does not
+    invalidate such permission if you have separately received it.
+
+    d) If the work has interactive user interfaces, each must display
+    Appropriate Legal Notices; however, if the Program has interactive
+    interfaces that do not display Appropriate Legal Notices, your
+    work need not make them do so.
+
+  A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit.  Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+  6. Conveying Non-Source Forms.
+
+  You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+    a) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by the
+    Corresponding Source fixed on a durable physical medium
+    customarily used for software interchange.
+
+    b) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by a
+    written offer, valid for at least three years and valid for as
+    long as you offer spare parts or customer support for that product
+    model, to give anyone who possesses the object code either (1) a
+    copy of the Corresponding Source for all the software in the
+    product that is covered by this License, on a durable physical
+    medium customarily used for software interchange, for a price no
+    more than your reasonable cost of physically performing this
+    conveying of source, or (2) access to copy the
+    Corresponding Source from a network server at no charge.
+
+    c) Convey individual copies of the object code with a copy of the
+    written offer to provide the Corresponding Source.  This
+    alternative is allowed only occasionally and noncommercially, and
+    only if you received the object code with such an offer, in accord
+    with subsection 6b.
+
+    d) Convey the object code by offering access from a designated
+    place (gratis or for a charge), and offer equivalent access to the
+    Corresponding Source in the same way through the same place at no
+    further charge.  You need not require recipients to copy the
+    Corresponding Source along with the object code.  If the place to
+    copy the object code is a network server, the Corresponding Source
+    may be on a different server (operated by you or a third party)
+    that supports equivalent copying facilities, provided you maintain
+    clear directions next to the object code saying where to find the
+    Corresponding Source.  Regardless of what server hosts the
+    Corresponding Source, you remain obligated to ensure that it is
+    available for as long as needed to satisfy these requirements.
+
+    e) Convey the object code using peer-to-peer transmission, provided
+    you inform other peers where the object code and Corresponding
+    Source of the work are being offered to the general public at no
+    charge under subsection 6d.
+
+  A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+  A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling.  In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage.  For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product.  A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+  "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source.  The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+  If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information.  But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+  The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed.  Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+  Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+  7. Additional Terms.
+
+  "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law.  If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+  When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it.  (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.)  You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+  Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+    a) Disclaiming warranty or limiting liability differently from the
+    terms of sections 15 and 16 of this License; or
+
+    b) Requiring preservation of specified reasonable legal notices or
+    author attributions in that material or in the Appropriate Legal
+    Notices displayed by works containing it; or
+
+    c) Prohibiting misrepresentation of the origin of that material, or
+    requiring that modified versions of such material be marked in
+    reasonable ways as different from the original version; or
+
+    d) Limiting the use for publicity purposes of names of licensors or
+    authors of the material; or
+
+    e) Declining to grant rights under trademark law for use of some
+    trade names, trademarks, or service marks; or
+
+    f) Requiring indemnification of licensors and authors of that
+    material by anyone who conveys the material (or modified versions of
+    it) with contractual assumptions of liability to the recipient, for
+    any liability that these contractual assumptions directly impose on
+    those licensors and authors.
+
+  All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10.  If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term.  If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+  If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+  Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+  8. Termination.
+
+  You may not propagate or modify a covered work except as expressly
+provided under this License.  Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+  However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+  Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+  Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License.  If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+  9. Acceptance Not Required for Having Copies.
+
+  You are not required to accept this License in order to receive or
+run a copy of the Program.  Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance.  However,
+nothing other than this License grants you permission to propagate or
+modify any covered work.  These actions infringe copyright if you do
+not accept this License.  Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+  10. Automatic Licensing of Downstream Recipients.
+
+  Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License.  You are not responsible
+for enforcing compliance by third parties with this License.
+
+  An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations.  If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+  You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License.  For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+  11. Patents.
+
+  A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based.  The
+work thus licensed is called the contributor's "contributor version".
+
+  A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version.  For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+  Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+  In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement).  To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+  If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients.  "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+  If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+  A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License.  You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+  Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+  12. No Surrender of Others' Freedom.
+
+  If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all.  For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+  13. Use with the GNU Affero General Public License.
+
+  Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work.  The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+  14. Revised Versions of this License.
+
+  The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+  Each version is given a distinguishing version number.  If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation.  If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+  If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+  Later license versions may give you additional or different
+permissions.  However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+  15. Disclaimer of Warranty.
+
+  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+  16. Limitation of Liability.
+
+  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+  17. Interpretation of Sections 15 and 16.
+
+  If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs.  If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.  But first, please read
+<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/check_utilisation.py b/check_utilisation.py
new file mode 100755
index 0000000000000000000000000000000000000000..bf06b1877227f374a7462f15083fcc3987455766
--- /dev/null
+++ b/check_utilisation.py
@@ -0,0 +1,557 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+'''
+This script checks the utilisation of HPC jobs.
+Run the program to get help on usage.
+
+Note to installers: 
+1. This program should be installed on just the login node under /opt/eresearch/
+2. The public version of the users database needs to be updated every time the
+   main database is updated.
+
+Notes:
+Cannot use job key 'euser' from pbs_connect(pbs_server):
+  At first I was using job['euser'] from the jobs returned from PBS to get the user
+  running the job. It turns out that this key is only available to PBS admin users
+  and not to ordinary users. If you do a "qstat -f job_id" of a job as a PBS admin and
+  as an ordinary user you will see that a few attributes are hidden from ordinary users.
+  You need to use 'job_owner' as the key. The attribute job['euser'] is like u999777
+  while job['job_owner'] is u999777@hpcnode1, where the last part is the login node name.
+
+Author: Mike Lake
+Releases: 
+2021-04-29 First release.
+
+'''
+
+import argparse
+import sys, os, re
+import pwd
+import datetime
+
+# Append whatever pbs directory is under the directory that this script is located
+# in. This ensures that we use /opt/eresearch/pbs for the version used by users and
+# whatever pbs is under this script if it's a development version.
+sys.path.append(os.path.join(os.path.dirname(sys.argv[0]), 'pbs'))
+
+import pbs
+from pbsutils import get_jobs, job_attributes_reformat
+from sqlite3 import dbapi2 as sqlite
+import smtplib
+
+#####################
+# Set parameters here
+#####################
+
+# You need to set the hostname of the PBS Server
+pbs_server = 'hpcnode0'
+
+# Name of the users database. The "public" one has some fields removed. This is because 
+# we plan to make this script and database available to users under /usr/local/bin/.
+# Do not use the full path here. It needs to reside in the same directory as this script.
+#users_db_name = 'users_ldap_public.db'
+users_db_name = 'users_ldap.db'
+
+# Target utilisation in percent. Anything below this will be tagged with "CHECK".
+target = 80
+
+# Number of past days to search for finished jobs from PBS history.
+# This must be an integer from 1 upwards. Usually 7 would be suitable.
+past_days = 5
+
+# Filename for the HTML output file.
+html_output = 'check_utilisation.html'
+
+# This line must be set to your email address.
+from_email = 'Mike.Lake@uts.edu.au'
+
+# Your login nodes mail server.
+mail_server='postoffice.uts.edu.au'
+
+prefix = '''
+<p>Hi</p>
+
+<p>The HPC is occasionally very busy and it is better for all users if we try to improve the
+throughput of jobs. Sometimes jobs request more CPU cores (ncpus) than they
+are capable of using. When you ask for 8 cores and only use 1 core, 7 cores lie idle.
+Those cores could have been used by other researchers.
+As an example, a simple Python program is single threaded and can only ever use one core.</p>
+
+<p>In the table below you will see your job(s). Consider the CPU and TIME "Utilisation" columns. 
+For each job those values should be close to 100%. Consider them like your high school reports :-)
+A description of these fields can be found under the table.</p>
+
+<p>If you are going to start a job then please consider how many cores (ncpus) your job really can utilise.
+During your run use "<code>qstat -f job_id</code>" and after the run "<code>qstat -fx job_id</code>" 
+to see if your job used the cores that you requested. The same can be done for memory and walltime. 
+Do not ask for more than your job requires.</p>
+
+<p>If you have any questions just email me and I'll try to assist.</p>
+'''
+
+postfix = '''
+<p>What is "cpu%" ? <br>
+The PBS scheduler polls all jobs every few minutes and calculates an integer
+value called "cpupercent" at each polling cycle. This is a moving weighted average
+of CPU usage for the cycle, given as the average percentage usage of one CPU.
+For example, a value of 50 means that during a certain period, the job used 50
+percent of one CPU. A value of 300 means that during the period, the job used
+an average of three CPUs. You can find the cpupercent used from the <code>qstat</code> command.
+</p>
+
+<p>What is "CPU Utilisation %" ? <br>
+This is what I have calculated. It's the cpupercent / ncpus requested.<br>
+If you ask for 1 core and use it fully then this will be close to 100%. <br>
+If you ask for 3 cores and use all of those then this will be 300%/3 = 100% again. <br>
+If you ask for 3 cores and use 1 core it will be about 33%. You do not get a pass mark :-)  
+</p>
+'''
+
+########################
+# Functions defined here
+########################
+
+def getKey(item):
+    # You can change this to job_id, resources_time_left etc.
+    return item['job_owner']
+
+def getKey2(item, string='job_owner'):
+    # Usage: jobs = sorted(jobs, key=lambda item: getKey2(item, string='job_owner'))
+    #return item['resources_time_left']
+    return item[string]
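+# A minimal example of sorting by a different attribute with getKey2, assuming the
+# reformatted jobs carry a 'resources_time_left' key as the commented line above suggests:
+#   jobs = sorted(jobs, key=lambda item: getKey2(item, string='resources_time_left'))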
+
+def print_table_start():
+    '''
+    This prints the start of the table of usage to the screen and
+    also to the HTML output file. The format will be like this:
+
+    Job ID  Job Owner  Job Name  Select Statement ncpus  cpu%  cputime  walltime  CPU Util  TIME Util  Comment
+                                                               (hours)   (hours) (percent)  (percent)
+    '''
+
+    # Append tuples of (heading, field width, units)
+    heading = []
+    heading.append(('Job ID',            9, ' '))
+    heading.append(('Job Owner',        11, ' '))
+    heading.append(('Job Name',         17, ' '))
+    heading.append(('Select Statement', 22, ' ' ))
+    heading.append(('ncpus',             6, ' ' ))
+    heading.append(('cpu%',              6, ' ' ))
+    heading.append(('cputime',           9, '(hours)' ))
+    heading.append(('walltime',         10, '(hours)' ))
+    heading.append(('CPU Util',         10, '(percent)' ))
+    heading.append(('TIME Util',        11, '(percent)' ))
+    heading.append(('Comment',           9, ' '))
+
+    # Print table heading on one line.
+    for item in heading: 
+        print(item[0].rjust(item[1]), end='')
+    print('')
+
+    # Print units for each table header on the next line.
+    for item in heading: 
+        print(item[2].rjust(item[1]), end='')
+    print('')
+
+    html="<table border=1 cellpadding=4>\n<tr>\n"
+
+    for item in heading:
+        if item[0] == 'CPU Util':
+            html = html + "<th>CPU<br>Utilisation</th>\n"
+        elif item[0] == 'TIME Util':
+            html = html + "<th>TIME<br>Utilisation</th>\n"
+        else:
+            html = html + "<th>%s</th>\n" % item[0]
+
+    html = html + "</tr>\n"
+
+    return html
+
+def print_table_end():
+
+    # Get a formatted date and time for use in the HTML report.
+    date_time = datetime.datetime.now().strftime('%Y-%m-%d at %I:%M %p')
+    # print() takes care of converting each element to a string.
+    html = '</table>\n'
+    # Create a string from the basename of the program and then append its args.
+    # We slice sys.argv as we don't want its first element, which is the full path
+    # to the program.
+    program_name = os.path.basename(sys.argv[0])
+    invocation = program_name + ' ' + ' '.join([str(s) for s in sys.argv[1:]])
+    html = html + "<p>HPC Utilisation Report created on %s from program <code>%s</code></p>\n" % \
+        (date_time, invocation)
+    return html
+
+def print_jobs(jobs, fh):
+
+    for job in jobs:
+        # print(job['job_id'])
+        # 136087.hpcnode0 did not have a cpu_percent value
+        # comment = Not Running: Insufficient amount of resource: ncpus  and terminated.
+        row = []
+
+        ###################################
+        # Calculate a CPU utilisation value
+        ###################################
+        # A queued job or an array parent job will not have these attributes set
+        # so we just assign cpu_utilisation to be zero.
+        try:
+            cpu_utilisation = float(job['resources_used_cpupercent']) / int(job['resources_used_ncpus'])
+        except (KeyError, ValueError, ZeroDivisionError):
+            cpu_utilisation = 0.0
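+        # For example, a job that requested ncpus=4 and shows cpupercent=200 has used
+        # on average 2 of its 4 cores, so cpu_utilisation = 200/4 = 50.0 (percent).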
+
+        ####################################
+        # Calculate a time utilisation value
+        ####################################
+
+        (cpu_hours, cpu_mins, cpu_secs) = job['resources_used_cput'].split(':')
+        (wall_hours, wall_mins, wall_secs) = job['resources_used_walltime'].split(':')
+
+        # Now we change these hours and mins into decimal hours.
+        cpu_hours = float(cpu_hours) + float(cpu_mins)/60.0 + float(cpu_secs)/3600.0
+        wall_hours = float(wall_hours) + float(wall_mins)/60.0 + float(wall_secs)/3600.0
+        # Note: if a job has just started the walltime might be zero and the
+        # calculated wall_hours will be 0. In this case set it to be a small 
+        # nominal value such as 0.1 hours.
+        if wall_hours == 0:
+            wall_hours = 0.1
+        
+        # Here we have to cast ncpus to a float so we can perform math operations with the other floats. 
+        # Note: A queued job or an array parent job will not have these attributes set.
+        try:
+            ncpus = float(job['resources_used_ncpus'])
+        except (KeyError, ValueError):
+            ncpus = 0
+
+        # If ncpus is zero or wall hours is zero (likely, if a job is queued) we
+        # will get a divide-by-zero. In this case set time_utilisation to zero.
+        # We should not get this though as in this script we remove queued jobs.
+        try:
+            time_utilisation = float(100.0*cpu_hours/ncpus/wall_hours)
+        except ZeroDivisionError:
+            time_utilisation = 0
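+        # For example, cpu_hours=6.0 on ncpus=2 over wall_hours=4.0 gives
+        # 100*6.0/2/4.0 = 75.0, i.e. the requested cores were busy for 75% of the walltime.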
+        
+        # The following are still strings so we can use rjust() for formatting.
+        # We also do not need to format them for the HTML output.
+        print(job['job_id'].rjust(9), end='');                      row.append(job['job_id'])
+        print(job['job_owner'].rjust(11), end='');                  row.append(job['job_owner'])
+        print(job['job_name'][0:15].rjust(17), end='');             row.append(job['job_name'])
+        print(job['resource_list_select'][0:20].rjust(22), end=''); row.append(job['resource_list_select'][0:20])
+        print(job['resources_used_ncpus'].rjust(6), end='');        row.append(job['resources_used_ncpus'])
+        print(job['resources_used_cpupercent'].rjust(6), end='');   row.append(job['resources_used_cpupercent'])
+
+        # These vars are floats.
+        s = '{:9.1f}'.format(cpu_hours);           print(s, end=''); row.append(s)
+        s = '{:10.1f}'.format(wall_hours);         print(s, end=''); row.append(s)
+        s = '{:9.1f}%'.format(cpu_utilisation);    print(s, end=''); row.append(s)
+        s = '{:10.1f}%'.format(time_utilisation);  print(s, end=''); row.append(s)
+
+        if cpu_utilisation > target and time_utilisation > target:
+            # Both CPU utilisation and TIME utilisation are more than our target.
+            comment='<span style="color:green;">Good</span>'
+            print ("  Good"); 
+        else:
+            comment='<span style="color:red;">CHECK !</span>'
+            print ("  CHECK !")
+
+        row.append(comment)
+
+        fh.write('<tr>\n')
+        for item in row:
+            fh.write('  <td>%s</td>\n' % item)
+        fh.write('</tr>\n')
+
+def get_user_email(users_db_path, user_id):
+    '''
+    Given a user_id return their email from the database of users.
+    If the user is not in the database return None for the email.
+    '''
+
+    # Strip off the leading 'u' from the user ID as the database does not contain this.
+    user_id = user_id.lstrip('u')
+    try:
+        con = sqlite.connect(users_db_path)
+    except sqlite.Error:
+        print("Error: Can't connect to the database.")
+        sys.exit()
+
+    con.row_factory = sqlite.Row
+    cur = con.cursor()
+    # Use a parameterised query so the user id is quoted safely.
+    cur.execute('SELECT uts_email FROM users WHERE uts_id=?', (user_id,))
+    row = cur.fetchone()
+    cur.close()
+    con.close()
+
+    # The returned row will be a tuple (email,) if the user exists, otherwise
+    # the row will be None.
+    if row is not None:
+        email = row[0]
+    else:
+        email = None
+
+    return email
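+# Illustrative usage only (the user id here is hypothetical):
+#   email = get_user_email(users_db_path, 'u999777')  # returns the email string or None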
+
+def send_email(from_email, recipient_email, message_body):
+    '''
+    This takes a "from" email, a "recipient" to send the email to, and 
+    a string "message_body" which has to be a filename of the file 
+    containing the body of the message to send. 
+    The email will be a HTML formatted email because the message body 
+    is a HTML file.
+    '''
+
+    # Mail header for a HTML email. Do not forget that newline!
+    mail_header = """From: <%s>
+To: <%s>
+Subject: HPC Utilisation Report
+MIME-Version: 1.0
+Content-Type: text/html; charset="us-ascii"
+Content-Transfer-Encoding: 7BIT
+Content-Disposition: inline
+
+""" % (from_email, recipient_email)
+
+    # Open and read in the file that contains the HTML formatted body of the message.
+    fh = open(message_body, 'r')
+    message_body = fh.read()
+    fh.close()
+
+    # The full message to send is the mail header plus the message body.
+    message = mail_header + message_body
+    if recipient_email is not None:
+        session = smtplib.SMTP(mail_server, 25)
+        print ('Sending to %s' % recipient_email)
+        try:
+            result = session.sendmail(from_email, recipient_email, message)
+            print("Sent OK\n")
+        except smtplib.SMTPException:
+            print("Error sending email.\n")
+
+        session.quit()
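+# Note: building the message by hand as above works, but a sketch of a more robust
+# alternative (not used here) would be the standard library's email.mime classes:
+#   from email.mime.text import MIMEText
+#   msg = MIMEText(message_body, 'html')
+#   msg['Subject'] = 'HPC Utilisation Report'
+#   msg['From'] = from_email
+#   msg['To'] = recipient_email
+#   session.send_message(msg)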
+
+def main():
+
+    ##################################################
+    # Check program args and access to required files.
+    ##################################################
+
+    # I have replaced the default help message with a clearer one.
+    parser = argparse.ArgumentParser(\
+        description='Check Your HPC Utilisation', \
+        usage="%(prog)s  running|finished|all  [-h] [-u USER] [-e EMAIL]", \
+        epilog='Contact Mike.Lake@uts.edu.au for further help.', \
+    ) 
+
+    parser.add_argument('state', choices=['running','finished','all'], default='running', \
+        help='Select one job state to report on.')
+    parser.add_argument('-u', '--user', help='Only show jobs for this user.')
+    parser.add_argument('-e', '--email', help='Email a copy of this report to yourself.')
+   
+    args = parser.parse_args()
+    state = args.state
+    user_id = args.user
+    recipient_email = args.email
+
+    # Check that we can access the HPC user database.
+    dirpath = os.path.dirname(sys.argv[0])
+    users_db_path = os.path.join(dirpath, users_db_name)
+    if not os.path.exists(users_db_path):
+        print ("The user database {} can\'t be found." .format(users_db_path))
+        print ("This program needs to be run from the same directory as the user database.") 
+        sys.exit() 
+
+    ##########################################################
+    # Connect to the PBS server and get the requested job data.
+    # We also get some times which we need.
+    ##########################################################
+
+    time_past  = datetime.datetime.now() - datetime.timedelta(days=past_days)
+    epoch_past = int(datetime.datetime.timestamp(time_past))
+    time_start = datetime.datetime.fromtimestamp(epoch_past)
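+    # For example, with past_days = 5 and a run on 2021-04-29 at 10:00, epoch_past is
+    # the Unix timestamp of 2021-04-24 10:00 and time_start is that same moment as a datetime.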
+        
+    print("\nChecking utilisation for jobs after", time_start.strftime('%Y-%m-%d %H:%M %p'))
+    if user_id is not None: 
+        print("Jobs limited to user", user_id)
+
+    conn = pbs.pbs_connect(pbs_server)
+
+    if state == 'running':
+        # This will get just current jobs: queued, running, and exiting.
+        # We have added the 't' to also include current array jobs.
+        jobs = get_jobs(conn, extend='t')
+        total = len(jobs)
+        if user_id is not None:
+            # Limit the jobs to just this user.
+            # We take the j['job_owner'] which is like u999777@hpcnode01 and 
+            # split it on the @ then take the first part.
+            jobs = [j for j in jobs if j['job_owner'].split('@')[0] == user_id]
+
+        # Only keep in the list jobs that are running.
+        # This will also remove array parent jobs as they are state 'B'.
+        jobs = [j for j in jobs if j['job_state'] == 'R']
+        print('Found %d running jobs out of %d total jobs.' % (len(jobs), total))
+    elif state == 'finished':
+        # This will get ALL jobs, current and finished.
+        jobs = get_jobs(conn, extend='xt')
+        total = len(jobs)
+        if user_id is not None:
+            jobs = [j for j in jobs if j['job_owner'].split('@')[0] == user_id]
+
+        # Only keep in the list jobs that are finished.
+        jobs = [j for j in jobs if j['job_state'] == 'F']
+        print('Found %d finished jobs out of %d total jobs in PBS history.' % (len(jobs), total))
+
+        # Only keep in the list jobs that finished in the last n days.
+        # Some jobs appear to be missing a stime and etime so we cannot use the line below.
+        #   jobs = [j for j in jobs if int(j['etime']) > epoch_past]
+        # This should work but does not because sometimes there is an etime but it's ''.
+        # Then the int of '' fails. 
+        #  jobs = [j for j in jobs if int(j.get('etime', 0)) > epoch_past]
+        # So we will use the for loop below.
+        jobs_tmp = []
+        for i in range(len(jobs)):
+            if 'etime' in jobs[i] and jobs[i]['etime']:
+                if int(jobs[i]['etime']) > epoch_past:
+                    jobs_tmp.append(jobs[i])
+
+        jobs = jobs_tmp
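+        # An equivalent single-line filter (kept as the loop above for readability) would be:
+        #   jobs = [j for j in jobs if j.get('etime') and int(j['etime']) > epoch_past]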
+        print('Found %d finished jobs from last %d days.' % (len(jobs), past_days))
+    elif state == 'all':
+        # This will get ALL jobs, current and finished.
+        jobs = get_jobs(conn, extend='xt')
+        total = len(jobs)
+        if user_id is not None:
+            jobs = [j for j in jobs if j['job_owner'].split('@')[0] == user_id]
+        
+        # Create two lists; jobs that are running and those that are finished.
+        jobs_running  = [j for j in jobs if j['job_state'] == 'R']
+        jobs_finished = [j for j in jobs if j['job_state'] == 'F']
+        print('Found %d running jobs and %d finished jobs, out of %d total jobs in PBS history.' % \
+            (len(jobs_running), len(jobs_finished), total))
+        
+        # Only keep in the finished list those finished in the last n days.
+        jobs_tmp = []
+        for i in range(len(jobs_finished)):
+            if 'etime' in jobs_finished[i] and jobs_finished[i]['etime']:
+                if int(jobs_finished[i]['etime']) > epoch_past:
+                    jobs_tmp.append(jobs_finished[i])
+        jobs_finished = jobs_tmp
+        print('Found %d finished jobs from last %d days.' % (len(jobs_finished), past_days))
+
+    else:
+        # We should never get here.
+        print("Invalid state %s" % state)
+        sys.exit()
+
+    pbs.pbs_disconnect(conn)
+
+    ######################################################
+    # Print the job data to the screen and as HTML output.
+    ######################################################
+
+    jobs = job_attributes_reformat(jobs)
+
+    # Sort by attribute.
+    # The sorted() function accepts a key function as an argument and calls it
+    # on each element prior to making comparisons with other elements.
+    jobs = sorted(jobs, key=getKey)
+
+    # Write the jobs to a HTML formatted output file.
+    try: 
+        fh = open(html_output, 'w')
+    except OSError:
+        print("I cannot create your HTML report. You are probably are running this script") 
+        print("from a localtion where you do not have permission to write to. Try running") 
+        print("this script from your home directory.")
+        sys.exit()
+    fh.write(prefix)
+
+    if state == 'running':
+        fh.write("<p>Running Jobs</p>")
+        fh.write(print_table_start())
+        print_jobs(jobs, fh)
+        fh.write(print_table_end())
+    elif state == 'finished':
+        fh.write("<p>Finished Jobs</p>")
+        fh.write(print_table_start())
+        print_jobs(jobs, fh)
+        fh.write(print_table_end())
+    elif state == 'all':
+        # Write the finished jobs first.
+        fh.write("<a name='finished'/>")
+        fh.write("<p><b>Finished Jobs</b> - Go to list of <a href='#running'>running jobs</a></p>")
+        print('\nFINISHED JOBS')
+        fh.write(print_table_start())
+        print_jobs(jobs_finished, fh)
+        fh.write(print_table_end())
+        # Then write the running jobs.
+        fh.write("<a name='running'/>")
+        fh.write("<p><b>Running Jobs</b> - Go to list of <a href='#finished'>finished jobs</a></p>")
+        print('\nRUNNING JOBS')
+        fh.write(print_table_start())
+        print_jobs(jobs_running, fh)
+        fh.write(print_table_end())
+    else:
+        # We should never get here.
+        print("Invalid state %s" % state)
+
+    fh.write(postfix)
+    fh.close()
+    print('\nWrote report %s ' % html_output) 
+
+    ###########################################################
+    # Show additional information underneath the table of jobs.
+    ###########################################################
+
+    # Get the login id of the user that is running this program.
+    # We can use either os.getuid() or os.geteuid() for effective uid.
+    this_user = pwd.getpwuid( os.getuid() ).pw_name
+
+    # For debugging you can uncomment this and add in a user 
+    # that has currently running jobs.
+    #this_user = 'uXXXXXX'
+
+    # Get the UTS email of the user that is running this program.
+    # Note that for admins with a local account this will be None.
+    this_user_email = get_user_email(users_db_path, this_user)
+    
+    invocation = ' '.join([str(s) for s in sys.argv])
+
+    # Now print this information to the screen.
+    print('\nTo rerun this, and email this report to yourself, run this command:')
+    if this_user_email is not None:
+        # We have an email for this user from the user database.
+        print('  %s -e %s' % (invocation, this_user_email))
+    else:
+        # We do not have an email, as this user was not found in the user database.
+        # This means they are running this script while logged in with a local
+        # account, i.e. they are an admin user.
+        # The admin user can email themselves.
+        print('  %s -e your_email' % invocation)
+        if user_id is not None:
+            # The admin user can email this specific user.
+            user_id_email = get_user_email(users_db_path, user_id)
+            print('  %s -e %s' % (invocation, user_id_email))
+    print("")
+
+    ####################################################
+    # Email a copy of the data as a HTML formatted file.
+    ####################################################
+
+    if recipient_email is not None:
+        # An optional email address has been provided.
+        if this_user_email is not None:
+            # We have an email for this user from the user database.
+            if this_user_email == recipient_email:
+                send_email(from_email, this_user_email, html_output)
+            else:
+                print("The email was not sent. You can only send email to %s" % this_user_email)
+        else:
+            # The user running this script is an admin.
+            print("Sent email to admin %s" % recipient_email)
+            send_email(from_email, recipient_email, html_output)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/check_utilisation_install.sh b/check_utilisation_install.sh
new file mode 100644
index 0000000000000000000000000000000000000000..6b587ce41101553983b2c1fff5f7f134c7f19d0a
--- /dev/null
+++ b/check_utilisation_install.sh
@@ -0,0 +1,31 @@
+#!/bin/bash
+
+# Installs the check_utilisation.py Python script and its dependencies.
+# TODO This would be placed in /usr/local/bin so it can be run by users,
+# but it is still being worked on so it's not installed for now.
+#
+# Usage: bash ./check_utilisation_install.sh
+
+dest="/opt/eresearch"
+
+# Check the public users database is up-to-date with the private one.
+num1=$(echo 'select count(id) from users;' | sqlite3 users_ldap.db)
+num2=$(echo 'select count(id) from users;' | sqlite3 users_ldap_public.db)
+
+if [ "$num1" -ne "$num2" ]; then
+    echo "Number of users in each database does not match ($num1 & $num2) so updating public database..."
+    ./users_ldap_public_create.sh
+else
+    echo "Number of users in each database is the same."
+fi
+
+# Now do the install.
+
+#mkdir -p ${dest}/pbs
+#cp pbs/pbs.py ${dest}/pbs
+#cp pbs/_pbs.so ${dest}/pbs
+#cp pbs/pbsutils.py ${dest}/pbs
+#cp check_utilisation.py ${dest}
+#cp users_ldap_public.db ${dest}
+#chmod ugo+x ${dest}/check_utilisation.py
+
diff --git a/pbs/pbs.py b/pbs/pbs.py
new file mode 100644
index 0000000000000000000000000000000000000000..3f4ba327c994953b6d5a538740cbfbba616b8d13
--- /dev/null
+++ b/pbs/pbs.py
@@ -0,0 +1,872 @@
+# This file was automatically generated by SWIG (http://www.swig.org).
+# Version 3.0.12
+#
+# Do not make changes to this file unless you know what you are doing--modify
+# the SWIG interface file instead.
+
+from sys import version_info as _swig_python_version_info
+if _swig_python_version_info >= (2, 7, 0):
+    def swig_import_helper():
+        import importlib
+        pkg = __name__.rpartition('.')[0]
+        mname = '.'.join((pkg, '_pbs')).lstrip('.')
+        try:
+            return importlib.import_module(mname)
+        except ImportError:
+            return importlib.import_module('_pbs')
+    _pbs = swig_import_helper()
+    del swig_import_helper
+elif _swig_python_version_info >= (2, 6, 0):
+    def swig_import_helper():
+        from os.path import dirname
+        import imp
+        fp = None
+        try:
+            fp, pathname, description = imp.find_module('_pbs', [dirname(__file__)])
+        except ImportError:
+            import _pbs
+            return _pbs
+        try:
+            _mod = imp.load_module('_pbs', fp, pathname, description)
+        finally:
+            if fp is not None:
+                fp.close()
+        return _mod
+    _pbs = swig_import_helper()
+    del swig_import_helper
+else:
+    import _pbs
+del _swig_python_version_info
+
+try:
+    _swig_property = property
+except NameError:
+    pass  # Python < 2.2 doesn't have 'property'.
+
+try:
+    import builtins as __builtin__
+except ImportError:
+    import __builtin__
+
+def _swig_setattr_nondynamic(self, class_type, name, value, static=1):
+    if (name == "thisown"):
+        return self.this.own(value)
+    if (name == "this"):
+        if type(value).__name__ == 'SwigPyObject':
+            self.__dict__[name] = value
+            return
+    method = class_type.__swig_setmethods__.get(name, None)
+    if method:
+        return method(self, value)
+    if (not static):
+        if _newclass:
+            object.__setattr__(self, name, value)
+        else:
+            self.__dict__[name] = value
+    else:
+        raise AttributeError("You cannot add attributes to %s" % self)
+
+
+def _swig_setattr(self, class_type, name, value):
+    return _swig_setattr_nondynamic(self, class_type, name, value, 0)
+
+
+def _swig_getattr(self, class_type, name):
+    if (name == "thisown"):
+        return self.this.own()
+    method = class_type.__swig_getmethods__.get(name, None)
+    if method:
+        return method(self)
+    raise AttributeError("'%s' object has no attribute '%s'" % (class_type.__name__, name))
+
+
+def _swig_repr(self):
+    try:
+        strthis = "proxy of " + self.this.__repr__()
+    except __builtin__.Exception:
+        strthis = ""
+    return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
+
+try:
+    _object = object
+    _newclass = 1
+except __builtin__.Exception:
+    class _object:
+        pass
+    _newclass = 0
+
+TYPE_ATTR_READONLY = _pbs.TYPE_ATTR_READONLY
+TYPE_ATTR_PUBLIC = _pbs.TYPE_ATTR_PUBLIC
+TYPE_ATTR_INVISIBLE = _pbs.TYPE_ATTR_INVISIBLE
+TYPE_ATTR_ALL = _pbs.TYPE_ATTR_ALL
+ATTR_a = _pbs.ATTR_a
+ATTR_c = _pbs.ATTR_c
+ATTR_e = _pbs.ATTR_e
+ATTR_g = _pbs.ATTR_g
+ATTR_h = _pbs.ATTR_h
+ATTR_j = _pbs.ATTR_j
+ATTR_J = _pbs.ATTR_J
+ATTR_k = _pbs.ATTR_k
+ATTR_l = _pbs.ATTR_l
+ATTR_l_orig = _pbs.ATTR_l_orig
+ATTR_l_acct = _pbs.ATTR_l_acct
+ATTR_m = _pbs.ATTR_m
+ATTR_o = _pbs.ATTR_o
+ATTR_p = _pbs.ATTR_p
+ATTR_q = _pbs.ATTR_q
+ATTR_R = _pbs.ATTR_R
+ATTR_r = _pbs.ATTR_r
+ATTR_u = _pbs.ATTR_u
+ATTR_v = _pbs.ATTR_v
+ATTR_A = _pbs.ATTR_A
+ATTR_M = _pbs.ATTR_M
+ATTR_N = _pbs.ATTR_N
+ATTR_S = _pbs.ATTR_S
+ATTR_array_indices_submitted = _pbs.ATTR_array_indices_submitted
+ATTR_depend = _pbs.ATTR_depend
+ATTR_inter = _pbs.ATTR_inter
+ATTR_sandbox = _pbs.ATTR_sandbox
+ATTR_stagein = _pbs.ATTR_stagein
+ATTR_stageout = _pbs.ATTR_stageout
+ATTR_resvTag = _pbs.ATTR_resvTag
+ATTR_resv_start = _pbs.ATTR_resv_start
+ATTR_resv_end = _pbs.ATTR_resv_end
+ATTR_resv_duration = _pbs.ATTR_resv_duration
+ATTR_resv_state = _pbs.ATTR_resv_state
+ATTR_resv_substate = _pbs.ATTR_resv_substate
+ATTR_resv_job = _pbs.ATTR_resv_job
+ATTR_auth_u = _pbs.ATTR_auth_u
+ATTR_auth_g = _pbs.ATTR_auth_g
+ATTR_auth_h = _pbs.ATTR_auth_h
+ATTR_cred = _pbs.ATTR_cred
+ATTR_nodemux = _pbs.ATTR_nodemux
+ATTR_umask = _pbs.ATTR_umask
+ATTR_block = _pbs.ATTR_block
+ATTR_convert = _pbs.ATTR_convert
+ATTR_DefaultChunk = _pbs.ATTR_DefaultChunk
+ATTR_X11_cookie = _pbs.ATTR_X11_cookie
+ATTR_X11_port = _pbs.ATTR_X11_port
+ATTR_GUI = _pbs.ATTR_GUI
+ATTR_resv_standing = _pbs.ATTR_resv_standing
+ATTR_resv_count = _pbs.ATTR_resv_count
+ATTR_resv_idx = _pbs.ATTR_resv_idx
+ATTR_resv_rrule = _pbs.ATTR_resv_rrule
+ATTR_resv_execvnodes = _pbs.ATTR_resv_execvnodes
+ATTR_resv_timezone = _pbs.ATTR_resv_timezone
+ATTR_ctime = _pbs.ATTR_ctime
+ATTR_estimated = _pbs.ATTR_estimated
+ATTR_exechost = _pbs.ATTR_exechost
+ATTR_exechost_acct = _pbs.ATTR_exechost_acct
+ATTR_exechost_orig = _pbs.ATTR_exechost_orig
+ATTR_exechost2 = _pbs.ATTR_exechost2
+ATTR_execvnode = _pbs.ATTR_execvnode
+ATTR_execvnode_acct = _pbs.ATTR_execvnode_acct
+ATTR_execvnode_deallocated = _pbs.ATTR_execvnode_deallocated
+ATTR_execvnode_orig = _pbs.ATTR_execvnode_orig
+ATTR_resv_nodes = _pbs.ATTR_resv_nodes
+ATTR_mtime = _pbs.ATTR_mtime
+ATTR_qtime = _pbs.ATTR_qtime
+ATTR_session = _pbs.ATTR_session
+ATTR_jobdir = _pbs.ATTR_jobdir
+ATTR_euser = _pbs.ATTR_euser
+ATTR_egroup = _pbs.ATTR_egroup
+ATTR_project = _pbs.ATTR_project
+ATTR_hashname = _pbs.ATTR_hashname
+ATTR_hopcount = _pbs.ATTR_hopcount
+ATTR_security = _pbs.ATTR_security
+ATTR_sched_hint = _pbs.ATTR_sched_hint
+ATTR_SchedSelect = _pbs.ATTR_SchedSelect
+ATTR_SchedSelect_orig = _pbs.ATTR_SchedSelect_orig
+ATTR_substate = _pbs.ATTR_substate
+ATTR_name = _pbs.ATTR_name
+ATTR_owner = _pbs.ATTR_owner
+ATTR_used = _pbs.ATTR_used
+ATTR_used_acct = _pbs.ATTR_used_acct
+ATTR_used_update = _pbs.ATTR_used_update
+ATTR_relnodes_on_stageout = _pbs.ATTR_relnodes_on_stageout
+ATTR_tolerate_node_failures = _pbs.ATTR_tolerate_node_failures
+ATTR_released = _pbs.ATTR_released
+ATTR_rel_list = _pbs.ATTR_rel_list
+ATTR_state = _pbs.ATTR_state
+ATTR_queue = _pbs.ATTR_queue
+ATTR_server = _pbs.ATTR_server
+ATTR_maxrun = _pbs.ATTR_maxrun
+ATTR_max_run = _pbs.ATTR_max_run
+ATTR_max_run_res = _pbs.ATTR_max_run_res
+ATTR_max_run_soft = _pbs.ATTR_max_run_soft
+ATTR_max_run_res_soft = _pbs.ATTR_max_run_res_soft
+ATTR_total = _pbs.ATTR_total
+ATTR_comment = _pbs.ATTR_comment
+ATTR_cookie = _pbs.ATTR_cookie
+ATTR_qrank = _pbs.ATTR_qrank
+ATTR_altid = _pbs.ATTR_altid
+ATTR_altid2 = _pbs.ATTR_altid2
+ATTR_acct_id = _pbs.ATTR_acct_id
+ATTR_array = _pbs.ATTR_array
+ATTR_array_id = _pbs.ATTR_array_id
+ATTR_array_index = _pbs.ATTR_array_index
+ATTR_array_state_count = _pbs.ATTR_array_state_count
+ATTR_array_indices_remaining = _pbs.ATTR_array_indices_remaining
+ATTR_etime = _pbs.ATTR_etime
+ATTR_gridname = _pbs.ATTR_gridname
+ATTR_refresh = _pbs.ATTR_refresh
+ATTR_ReqCredEnable = _pbs.ATTR_ReqCredEnable
+ATTR_ReqCred = _pbs.ATTR_ReqCred
+ATTR_runcount = _pbs.ATTR_runcount
+ATTR_run_version = _pbs.ATTR_run_version
+ATTR_stime = _pbs.ATTR_stime
+ATTR_pset = _pbs.ATTR_pset
+ATTR_executable = _pbs.ATTR_executable
+ATTR_Arglist = _pbs.ATTR_Arglist
+ATTR_version = _pbs.ATTR_version
+ATTR_eligible_time = _pbs.ATTR_eligible_time
+ATTR_accrue_type = _pbs.ATTR_accrue_type
+ATTR_sample_starttime = _pbs.ATTR_sample_starttime
+ATTR_job_kill_delay = _pbs.ATTR_job_kill_delay
+ATTR_topjob_ineligible = _pbs.ATTR_topjob_ineligible
+ATTR_submit_host = _pbs.ATTR_submit_host
+ATTR_cred_id = _pbs.ATTR_cred_id
+ATTR_cred_validity = _pbs.ATTR_cred_validity
+ATTR_history_timestamp = _pbs.ATTR_history_timestamp
+ATTR_create_resv_from_job = _pbs.ATTR_create_resv_from_job
+ATTR_stageout_status = _pbs.ATTR_stageout_status
+ATTR_exit_status = _pbs.ATTR_exit_status
+ATTR_submit_arguments = _pbs.ATTR_submit_arguments
+ATTR_resv_name = _pbs.ATTR_resv_name
+ATTR_resv_owner = _pbs.ATTR_resv_owner
+ATTR_resv_type = _pbs.ATTR_resv_type
+ATTR_resv_Tag = _pbs.ATTR_resv_Tag
+ATTR_resv_ID = _pbs.ATTR_resv_ID
+ATTR_resv_retry = _pbs.ATTR_resv_retry
+ATTR_del_idle_time = _pbs.ATTR_del_idle_time
+ATTR_aclgren = _pbs.ATTR_aclgren
+ATTR_aclgroup = _pbs.ATTR_aclgroup
+ATTR_aclhten = _pbs.ATTR_aclhten
+ATTR_aclhost = _pbs.ATTR_aclhost
+ATTR_aclhostmomsen = _pbs.ATTR_aclhostmomsen
+ATTR_acluren = _pbs.ATTR_acluren
+ATTR_acluser = _pbs.ATTR_acluser
+ATTR_altrouter = _pbs.ATTR_altrouter
+ATTR_chkptmin = _pbs.ATTR_chkptmin
+ATTR_enable = _pbs.ATTR_enable
+ATTR_fromroute = _pbs.ATTR_fromroute
+ATTR_HasNodes = _pbs.ATTR_HasNodes
+ATTR_killdelay = _pbs.ATTR_killdelay
+ATTR_maxgrprun = _pbs.ATTR_maxgrprun
+ATTR_maxgrprunsoft = _pbs.ATTR_maxgrprunsoft
+ATTR_maxque = _pbs.ATTR_maxque
+ATTR_max_queued = _pbs.ATTR_max_queued
+ATTR_max_queued_res = _pbs.ATTR_max_queued_res
+ATTR_queued_jobs_threshold = _pbs.ATTR_queued_jobs_threshold
+ATTR_queued_jobs_threshold_res = _pbs.ATTR_queued_jobs_threshold_res
+ATTR_maxuserrun = _pbs.ATTR_maxuserrun
+ATTR_maxuserrunsoft = _pbs.ATTR_maxuserrunsoft
+ATTR_qtype = _pbs.ATTR_qtype
+ATTR_rescassn = _pbs.ATTR_rescassn
+ATTR_rescdflt = _pbs.ATTR_rescdflt
+ATTR_rescmax = _pbs.ATTR_rescmax
+ATTR_rescmin = _pbs.ATTR_rescmin
+ATTR_rndzretry = _pbs.ATTR_rndzretry
+ATTR_routedest = _pbs.ATTR_routedest
+ATTR_routeheld = _pbs.ATTR_routeheld
+ATTR_routewait = _pbs.ATTR_routewait
+ATTR_routeretry = _pbs.ATTR_routeretry
+ATTR_routelife = _pbs.ATTR_routelife
+ATTR_rsvexpdt = _pbs.ATTR_rsvexpdt
+ATTR_rsvsync = _pbs.ATTR_rsvsync
+ATTR_start = _pbs.ATTR_start
+ATTR_count = _pbs.ATTR_count
+ATTR_number = _pbs.ATTR_number
+ATTR_jobscript_max_size = _pbs.ATTR_jobscript_max_size
+ATTR_SvrHost = _pbs.ATTR_SvrHost
+ATTR_aclroot = _pbs.ATTR_aclroot
+ATTR_managers = _pbs.ATTR_managers
+ATTR_dfltque = _pbs.ATTR_dfltque
+ATTR_defnode = _pbs.ATTR_defnode
+ATTR_locsvrs = _pbs.ATTR_locsvrs
+ATTR_logevents = _pbs.ATTR_logevents
+ATTR_logfile = _pbs.ATTR_logfile
+ATTR_mailfrom = _pbs.ATTR_mailfrom
+ATTR_nodepack = _pbs.ATTR_nodepack
+ATTR_nodefailrq = _pbs.ATTR_nodefailrq
+ATTR_operators = _pbs.ATTR_operators
+ATTR_queryother = _pbs.ATTR_queryother
+ATTR_resccost = _pbs.ATTR_resccost
+ATTR_rescavail = _pbs.ATTR_rescavail
+ATTR_maxuserres = _pbs.ATTR_maxuserres
+ATTR_maxuserressoft = _pbs.ATTR_maxuserressoft
+ATTR_maxgroupres = _pbs.ATTR_maxgroupres
+ATTR_maxgroupressoft = _pbs.ATTR_maxgroupressoft
+ATTR_maxarraysize = _pbs.ATTR_maxarraysize
+ATTR_PNames = _pbs.ATTR_PNames
+ATTR_schediteration = _pbs.ATTR_schediteration
+ATTR_scheduling = _pbs.ATTR_scheduling
+ATTR_status = _pbs.ATTR_status
+ATTR_syscost = _pbs.ATTR_syscost
+ATTR_FlatUID = _pbs.ATTR_FlatUID
+ATTR_FLicenses = _pbs.ATTR_FLicenses
+ATTR_ResvEnable = _pbs.ATTR_ResvEnable
+ATTR_aclResvgren = _pbs.ATTR_aclResvgren
+ATTR_aclResvgroup = _pbs.ATTR_aclResvgroup
+ATTR_aclResvhten = _pbs.ATTR_aclResvhten
+ATTR_aclResvhost = _pbs.ATTR_aclResvhost
+ATTR_aclResvuren = _pbs.ATTR_aclResvuren
+ATTR_aclResvuser = _pbs.ATTR_aclResvuser
+ATTR_NodeGroupEnable = _pbs.ATTR_NodeGroupEnable
+ATTR_NodeGroupKey = _pbs.ATTR_NodeGroupKey
+ATTR_dfltqdelargs = _pbs.ATTR_dfltqdelargs
+ATTR_dfltqsubargs = _pbs.ATTR_dfltqsubargs
+ATTR_rpp_retry = _pbs.ATTR_rpp_retry
+ATTR_rpp_highwater = _pbs.ATTR_rpp_highwater
+ATTR_pbs_license_info = _pbs.ATTR_pbs_license_info
+ATTR_license_min = _pbs.ATTR_license_min
+ATTR_license_max = _pbs.ATTR_license_max
+ATTR_license_linger = _pbs.ATTR_license_linger
+ATTR_license_count = _pbs.ATTR_license_count
+ATTR_job_sort_formula = _pbs.ATTR_job_sort_formula
+ATTR_EligibleTimeEnable = _pbs.ATTR_EligibleTimeEnable
+ATTR_resv_retry_time = _pbs.ATTR_resv_retry_time
+ATTR_resv_retry_init = _pbs.ATTR_resv_retry_init
+ATTR_JobHistoryEnable = _pbs.ATTR_JobHistoryEnable
+ATTR_JobHistoryDuration = _pbs.ATTR_JobHistoryDuration
+ATTR_max_concurrent_prov = _pbs.ATTR_max_concurrent_prov
+ATTR_resv_post_processing = _pbs.ATTR_resv_post_processing
+ATTR_backfill_depth = _pbs.ATTR_backfill_depth
+ATTR_job_requeue_timeout = _pbs.ATTR_job_requeue_timeout
+ATTR_show_hidden_attribs = _pbs.ATTR_show_hidden_attribs
+ATTR_python_restart_max_hooks = _pbs.ATTR_python_restart_max_hooks
+ATTR_python_restart_max_objects = _pbs.ATTR_python_restart_max_objects
+ATTR_python_restart_min_interval = _pbs.ATTR_python_restart_min_interval
+ATTR_power_provisioning = _pbs.ATTR_power_provisioning
+ATTR_sync_mom_hookfiles_timeout = _pbs.ATTR_sync_mom_hookfiles_timeout
+ATTR_max_job_sequence_id = _pbs.ATTR_max_job_sequence_id
+ATTR_acl_krb_realm_enable = _pbs.ATTR_acl_krb_realm_enable
+ATTR_acl_krb_realms = _pbs.ATTR_acl_krb_realms
+ATTR_acl_krb_submit_realms = _pbs.ATTR_acl_krb_submit_realms
+ATTR_cred_renew_enable = _pbs.ATTR_cred_renew_enable
+ATTR_cred_renew_tool = _pbs.ATTR_cred_renew_tool
+ATTR_cred_renew_period = _pbs.ATTR_cred_renew_period
+ATTR_cred_renew_cache_period = _pbs.ATTR_cred_renew_cache_period
+ATTR_rpp_max_pkt_check = _pbs.ATTR_rpp_max_pkt_check
+ATTR_SchedHost = _pbs.ATTR_SchedHost
+ATTR_sched_cycle_len = _pbs.ATTR_sched_cycle_len
+ATTR_do_not_span_psets = _pbs.ATTR_do_not_span_psets
+ATTR_only_explicit_psets = _pbs.ATTR_only_explicit_psets
+ATTR_sched_preempt_enforce_resumption = _pbs.ATTR_sched_preempt_enforce_resumption
+ATTR_preempt_targets_enable = _pbs.ATTR_preempt_targets_enable
+ATTR_job_sort_formula_threshold = _pbs.ATTR_job_sort_formula_threshold
+ATTR_throughput_mode = _pbs.ATTR_throughput_mode
+ATTR_opt_backfill_fuzzy = _pbs.ATTR_opt_backfill_fuzzy
+ATTR_sched_port = _pbs.ATTR_sched_port
+ATTR_partition = _pbs.ATTR_partition
+ATTR_sched_priv = _pbs.ATTR_sched_priv
+ATTR_sched_log = _pbs.ATTR_sched_log
+ATTR_sched_user = _pbs.ATTR_sched_user
+ATTR_sched_state = _pbs.ATTR_sched_state
+ATTR_sched_preempt_queue_prio = _pbs.ATTR_sched_preempt_queue_prio
+ATTR_sched_preempt_prio = _pbs.ATTR_sched_preempt_prio
+ATTR_sched_preempt_order = _pbs.ATTR_sched_preempt_order
+ATTR_sched_preempt_sort = _pbs.ATTR_sched_preempt_sort
+ATTR_sched_server_dyn_res_alarm = _pbs.ATTR_sched_server_dyn_res_alarm
+ATTR_NODE_Host = _pbs.ATTR_NODE_Host
+ATTR_NODE_Mom = _pbs.ATTR_NODE_Mom
+ATTR_NODE_Port = _pbs.ATTR_NODE_Port
+ATTR_NODE_state = _pbs.ATTR_NODE_state
+ATTR_NODE_ntype = _pbs.ATTR_NODE_ntype
+ATTR_NODE_jobs = _pbs.ATTR_NODE_jobs
+ATTR_NODE_resvs = _pbs.ATTR_NODE_resvs
+ATTR_NODE_resv_enable = _pbs.ATTR_NODE_resv_enable
+ATTR_NODE_np = _pbs.ATTR_NODE_np
+ATTR_NODE_pcpus = _pbs.ATTR_NODE_pcpus
+ATTR_NODE_properties = _pbs.ATTR_NODE_properties
+ATTR_NODE_NoMultiNode = _pbs.ATTR_NODE_NoMultiNode
+ATTR_NODE_No_Tasks = _pbs.ATTR_NODE_No_Tasks
+ATTR_NODE_Sharing = _pbs.ATTR_NODE_Sharing
+ATTR_NODE_ProvisionEnable = _pbs.ATTR_NODE_ProvisionEnable
+ATTR_NODE_current_aoe = _pbs.ATTR_NODE_current_aoe
+ATTR_NODE_in_multivnode_host = _pbs.ATTR_NODE_in_multivnode_host
+ATTR_NODE_License = _pbs.ATTR_NODE_License
+ATTR_NODE_LicenseInfo = _pbs.ATTR_NODE_LicenseInfo
+ATTR_NODE_TopologyInfo = _pbs.ATTR_NODE_TopologyInfo
+ATTR_NODE_MaintJobs = _pbs.ATTR_NODE_MaintJobs
+ATTR_NODE_VnodePool = _pbs.ATTR_NODE_VnodePool
+ATTR_NODE_current_eoe = _pbs.ATTR_NODE_current_eoe
+ATTR_NODE_power_provisioning = _pbs.ATTR_NODE_power_provisioning
+ATTR_NODE_poweroff_eligible = _pbs.ATTR_NODE_poweroff_eligible
+ATTR_NODE_last_state_change_time = _pbs.ATTR_NODE_last_state_change_time
+ATTR_NODE_last_used_time = _pbs.ATTR_NODE_last_used_time
+ND_RESC_LicSignature = _pbs.ND_RESC_LicSignature
+ATTR_RESC_TYPE = _pbs.ATTR_RESC_TYPE
+ATTR_RESC_FLAG = _pbs.ATTR_RESC_FLAG
+CHECKPOINT_UNSPECIFIED = _pbs.CHECKPOINT_UNSPECIFIED
+NO_HOLD = _pbs.NO_HOLD
+NO_JOIN = _pbs.NO_JOIN
+NO_KEEP = _pbs.NO_KEEP
+MAIL_AT_ABORT = _pbs.MAIL_AT_ABORT
+USER_HOLD = _pbs.USER_HOLD
+OTHER_HOLD = _pbs.OTHER_HOLD
+SYSTEM_HOLD = _pbs.SYSTEM_HOLD
+BAD_PASSWORD_HOLD = _pbs.BAD_PASSWORD_HOLD
+MGR_CMD_NONE = _pbs.MGR_CMD_NONE
+MGR_CMD_CREATE = _pbs.MGR_CMD_CREATE
+MGR_CMD_DELETE = _pbs.MGR_CMD_DELETE
+MGR_CMD_SET = _pbs.MGR_CMD_SET
+MGR_CMD_UNSET = _pbs.MGR_CMD_UNSET
+MGR_CMD_LIST = _pbs.MGR_CMD_LIST
+MGR_CMD_PRINT = _pbs.MGR_CMD_PRINT
+MGR_CMD_ACTIVE = _pbs.MGR_CMD_ACTIVE
+MGR_CMD_IMPORT = _pbs.MGR_CMD_IMPORT
+MGR_CMD_EXPORT = _pbs.MGR_CMD_EXPORT
+MGR_CMD_LAST = _pbs.MGR_CMD_LAST
+MGR_OBJ_NONE = _pbs.MGR_OBJ_NONE
+MGR_OBJ_SERVER = _pbs.MGR_OBJ_SERVER
+MGR_OBJ_QUEUE = _pbs.MGR_OBJ_QUEUE
+MGR_OBJ_JOB = _pbs.MGR_OBJ_JOB
+MGR_OBJ_NODE = _pbs.MGR_OBJ_NODE
+MGR_OBJ_RESV = _pbs.MGR_OBJ_RESV
+MGR_OBJ_RSC = _pbs.MGR_OBJ_RSC
+MGR_OBJ_SCHED = _pbs.MGR_OBJ_SCHED
+MGR_OBJ_HOST = _pbs.MGR_OBJ_HOST
+MGR_OBJ_HOOK = _pbs.MGR_OBJ_HOOK
+MGR_OBJ_PBS_HOOK = _pbs.MGR_OBJ_PBS_HOOK
+MGR_OBJ_LAST = _pbs.MGR_OBJ_LAST
+SITE_HOOK = _pbs.SITE_HOOK
+PBS_HOOK = _pbs.PBS_HOOK
+MSG_OUT = _pbs.MSG_OUT
+MSG_ERR = _pbs.MSG_ERR
+BLUEGENE = _pbs.BLUEGENE
+PBS_MAXHOSTNAME = _pbs.PBS_MAXHOSTNAME
+MAXPATHLEN = _pbs.MAXPATHLEN
+MAXNAMLEN = _pbs.MAXNAMLEN
+PBS_MAXSCHEDNAME = _pbs.PBS_MAXSCHEDNAME
+PBS_MAXUSER = _pbs.PBS_MAXUSER
+PBS_MAXPWLEN = _pbs.PBS_MAXPWLEN
+PBS_MAXGRPN = _pbs.PBS_MAXGRPN
+PBS_MAXQUEUENAME = _pbs.PBS_MAXQUEUENAME
+PBS_MAXJOBNAME = _pbs.PBS_MAXJOBNAME
+PBS_MAXSERVERNAME = _pbs.PBS_MAXSERVERNAME
+PBS_MAXSEQNUM = _pbs.PBS_MAXSEQNUM
+PBS_DFLT_MAX_JOB_SEQUENCE_ID = _pbs.PBS_DFLT_MAX_JOB_SEQUENCE_ID
+PBS_MAXPORTNUM = _pbs.PBS_MAXPORTNUM
+PBS_MAXSVRJOBID = _pbs.PBS_MAXSVRJOBID
+PBS_MAXSVRRESVID = _pbs.PBS_MAXSVRRESVID
+PBS_MAXQRESVNAME = _pbs.PBS_MAXQRESVNAME
+PBS_MAXCLTJOBID = _pbs.PBS_MAXCLTJOBID
+PBS_MAXDEST = _pbs.PBS_MAXDEST
+PBS_MAXROUTEDEST = _pbs.PBS_MAXROUTEDEST
+PBS_INTERACTIVE = _pbs.PBS_INTERACTIVE
+PBS_TERM_BUF_SZ = _pbs.PBS_TERM_BUF_SZ
+PBS_TERM_CCA = _pbs.PBS_TERM_CCA
+PBS_RESV_ID_CHAR = _pbs.PBS_RESV_ID_CHAR
+PBS_STDNG_RESV_ID_CHAR = _pbs.PBS_STDNG_RESV_ID_CHAR
+PBS_MNTNC_RESV_ID_CHAR = _pbs.PBS_MNTNC_RESV_ID_CHAR
+PBS_AUTH_KEY_LEN = _pbs.PBS_AUTH_KEY_LEN
+SET = _pbs.SET
+UNSET = _pbs.UNSET
+INCR = _pbs.INCR
+DECR = _pbs.DECR
+EQ = _pbs.EQ
+NE = _pbs.NE
+GE = _pbs.GE
+GT = _pbs.GT
+LE = _pbs.LE
+LT = _pbs.LT
+DFLT = _pbs.DFLT
+SHUT_IMMEDIATE = _pbs.SHUT_IMMEDIATE
+SHUT_DELAY = _pbs.SHUT_DELAY
+SHUT_QUICK = _pbs.SHUT_QUICK
+FORCEDEL = _pbs.FORCEDEL
+NOMAIL = _pbs.NOMAIL
+SUPPRESS_EMAIL = _pbs.SUPPRESS_EMAIL
+DELETEHISTORY = _pbs.DELETEHISTORY
+class attrl(_object):
+    __swig_setmethods__ = {}
+    __setattr__ = lambda self, name, value: _swig_setattr(self, attrl, name, value)
+    __swig_getmethods__ = {}
+    __getattr__ = lambda self, name: _swig_getattr(self, attrl, name)
+    __repr__ = _swig_repr
+    __swig_setmethods__["next"] = _pbs.attrl_next_set
+    __swig_getmethods__["next"] = _pbs.attrl_next_get
+    if _newclass:
+        next = _swig_property(_pbs.attrl_next_get, _pbs.attrl_next_set)
+    __swig_setmethods__["name"] = _pbs.attrl_name_set
+    __swig_getmethods__["name"] = _pbs.attrl_name_get
+    if _newclass:
+        name = _swig_property(_pbs.attrl_name_get, _pbs.attrl_name_set)
+    __swig_setmethods__["resource"] = _pbs.attrl_resource_set
+    __swig_getmethods__["resource"] = _pbs.attrl_resource_get
+    if _newclass:
+        resource = _swig_property(_pbs.attrl_resource_get, _pbs.attrl_resource_set)
+    __swig_setmethods__["value"] = _pbs.attrl_value_set
+    __swig_getmethods__["value"] = _pbs.attrl_value_get
+    if _newclass:
+        value = _swig_property(_pbs.attrl_value_get, _pbs.attrl_value_set)
+    __swig_setmethods__["op"] = _pbs.attrl_op_set
+    __swig_getmethods__["op"] = _pbs.attrl_op_get
+    if _newclass:
+        op = _swig_property(_pbs.attrl_op_get, _pbs.attrl_op_set)
+
+    def __init__(self):
+        this = _pbs.new_attrl()
+        try:
+            self.this.append(this)
+        except __builtin__.Exception:
+            self.this = this
+    __swig_destroy__ = _pbs.delete_attrl
+    __del__ = lambda self: None
+attrl_swigregister = _pbs.attrl_swigregister
+attrl_swigregister(attrl)
+
+class attropl(_object):
+    __swig_setmethods__ = {}
+    __setattr__ = lambda self, name, value: _swig_setattr(self, attropl, name, value)
+    __swig_getmethods__ = {}
+    __getattr__ = lambda self, name: _swig_getattr(self, attropl, name)
+    __repr__ = _swig_repr
+    __swig_setmethods__["next"] = _pbs.attropl_next_set
+    __swig_getmethods__["next"] = _pbs.attropl_next_get
+    if _newclass:
+        next = _swig_property(_pbs.attropl_next_get, _pbs.attropl_next_set)
+    __swig_setmethods__["name"] = _pbs.attropl_name_set
+    __swig_getmethods__["name"] = _pbs.attropl_name_get
+    if _newclass:
+        name = _swig_property(_pbs.attropl_name_get, _pbs.attropl_name_set)
+    __swig_setmethods__["resource"] = _pbs.attropl_resource_set
+    __swig_getmethods__["resource"] = _pbs.attropl_resource_get
+    if _newclass:
+        resource = _swig_property(_pbs.attropl_resource_get, _pbs.attropl_resource_set)
+    __swig_setmethods__["value"] = _pbs.attropl_value_set
+    __swig_getmethods__["value"] = _pbs.attropl_value_get
+    if _newclass:
+        value = _swig_property(_pbs.attropl_value_get, _pbs.attropl_value_set)
+    __swig_setmethods__["op"] = _pbs.attropl_op_set
+    __swig_getmethods__["op"] = _pbs.attropl_op_get
+    if _newclass:
+        op = _swig_property(_pbs.attropl_op_get, _pbs.attropl_op_set)
+
+    def __init__(self):
+        this = _pbs.new_attropl()
+        try:
+            self.this.append(this)
+        except __builtin__.Exception:
+            self.this = this
+    __swig_destroy__ = _pbs.delete_attropl
+    __del__ = lambda self: None
+attropl_swigregister = _pbs.attropl_swigregister
+attropl_swigregister(attropl)
+
+class batch_status(_object):
+    __swig_setmethods__ = {}
+    __setattr__ = lambda self, name, value: _swig_setattr(self, batch_status, name, value)
+    __swig_getmethods__ = {}
+    __getattr__ = lambda self, name: _swig_getattr(self, batch_status, name)
+    __repr__ = _swig_repr
+    __swig_setmethods__["next"] = _pbs.batch_status_next_set
+    __swig_getmethods__["next"] = _pbs.batch_status_next_get
+    if _newclass:
+        next = _swig_property(_pbs.batch_status_next_get, _pbs.batch_status_next_set)
+    __swig_setmethods__["name"] = _pbs.batch_status_name_set
+    __swig_getmethods__["name"] = _pbs.batch_status_name_get
+    if _newclass:
+        name = _swig_property(_pbs.batch_status_name_get, _pbs.batch_status_name_set)
+    __swig_setmethods__["attribs"] = _pbs.batch_status_attribs_set
+    __swig_getmethods__["attribs"] = _pbs.batch_status_attribs_get
+    if _newclass:
+        attribs = _swig_property(_pbs.batch_status_attribs_get, _pbs.batch_status_attribs_set)
+    __swig_setmethods__["text"] = _pbs.batch_status_text_set
+    __swig_getmethods__["text"] = _pbs.batch_status_text_get
+    if _newclass:
+        text = _swig_property(_pbs.batch_status_text_get, _pbs.batch_status_text_set)
+
+    def __init__(self):
+        this = _pbs.new_batch_status()
+        try:
+            self.this.append(this)
+        except __builtin__.Exception:
+            self.this = this
+    __swig_destroy__ = _pbs.delete_batch_status
+    __del__ = lambda self: None
+batch_status_swigregister = _pbs.batch_status_swigregister
+batch_status_swigregister(batch_status)
+
+class ecl_attrerr(_object):
+    __swig_setmethods__ = {}
+    __setattr__ = lambda self, name, value: _swig_setattr(self, ecl_attrerr, name, value)
+    __swig_getmethods__ = {}
+    __getattr__ = lambda self, name: _swig_getattr(self, ecl_attrerr, name)
+    __repr__ = _swig_repr
+    __swig_setmethods__["ecl_attribute"] = _pbs.ecl_attrerr_ecl_attribute_set
+    __swig_getmethods__["ecl_attribute"] = _pbs.ecl_attrerr_ecl_attribute_get
+    if _newclass:
+        ecl_attribute = _swig_property(_pbs.ecl_attrerr_ecl_attribute_get, _pbs.ecl_attrerr_ecl_attribute_set)
+    __swig_setmethods__["ecl_errcode"] = _pbs.ecl_attrerr_ecl_errcode_set
+    __swig_getmethods__["ecl_errcode"] = _pbs.ecl_attrerr_ecl_errcode_get
+    if _newclass:
+        ecl_errcode = _swig_property(_pbs.ecl_attrerr_ecl_errcode_get, _pbs.ecl_attrerr_ecl_errcode_set)
+    __swig_setmethods__["ecl_errmsg"] = _pbs.ecl_attrerr_ecl_errmsg_set
+    __swig_getmethods__["ecl_errmsg"] = _pbs.ecl_attrerr_ecl_errmsg_get
+    if _newclass:
+        ecl_errmsg = _swig_property(_pbs.ecl_attrerr_ecl_errmsg_get, _pbs.ecl_attrerr_ecl_errmsg_set)
+
+    def __init__(self):
+        this = _pbs.new_ecl_attrerr()
+        try:
+            self.this.append(this)
+        except __builtin__.Exception:
+            self.this = this
+    __swig_destroy__ = _pbs.delete_ecl_attrerr
+    __del__ = lambda self: None
+ecl_attrerr_swigregister = _pbs.ecl_attrerr_swigregister
+ecl_attrerr_swigregister(ecl_attrerr)
+
+class ecl_attribute_errors(_object):
+    __swig_setmethods__ = {}
+    __setattr__ = lambda self, name, value: _swig_setattr(self, ecl_attribute_errors, name, value)
+    __swig_getmethods__ = {}
+    __getattr__ = lambda self, name: _swig_getattr(self, ecl_attribute_errors, name)
+    __repr__ = _swig_repr
+    __swig_setmethods__["ecl_numerrors"] = _pbs.ecl_attribute_errors_ecl_numerrors_set
+    __swig_getmethods__["ecl_numerrors"] = _pbs.ecl_attribute_errors_ecl_numerrors_get
+    if _newclass:
+        ecl_numerrors = _swig_property(_pbs.ecl_attribute_errors_ecl_numerrors_get, _pbs.ecl_attribute_errors_ecl_numerrors_set)
+    __swig_setmethods__["ecl_attrerr"] = _pbs.ecl_attribute_errors_ecl_attrerr_set
+    __swig_getmethods__["ecl_attrerr"] = _pbs.ecl_attribute_errors_ecl_attrerr_get
+    if _newclass:
+        ecl_attrerr = _swig_property(_pbs.ecl_attribute_errors_ecl_attrerr_get, _pbs.ecl_attribute_errors_ecl_attrerr_set)
+
+    def __init__(self):
+        this = _pbs.new_ecl_attribute_errors()
+        try:
+            self.this.append(this)
+        except __builtin__.Exception:
+            self.this = this
+    __swig_destroy__ = _pbs.delete_ecl_attribute_errors
+    __del__ = lambda self: None
+ecl_attribute_errors_swigregister = _pbs.ecl_attribute_errors_swigregister
+ecl_attribute_errors_swigregister(ecl_attribute_errors)
+
+PREEMPT_METHOD_LOW = _pbs.PREEMPT_METHOD_LOW
+PREEMPT_METHOD_SUSPEND = _pbs.PREEMPT_METHOD_SUSPEND
+PREEMPT_METHOD_CHECKPOINT = _pbs.PREEMPT_METHOD_CHECKPOINT
+PREEMPT_METHOD_REQUEUE = _pbs.PREEMPT_METHOD_REQUEUE
+PREEMPT_METHOD_DELETE = _pbs.PREEMPT_METHOD_DELETE
+PREEMPT_METHOD_HIGH = _pbs.PREEMPT_METHOD_HIGH
+class preempt_job_info(_object):
+    __swig_setmethods__ = {}
+    __setattr__ = lambda self, name, value: _swig_setattr(self, preempt_job_info, name, value)
+    __swig_getmethods__ = {}
+    __getattr__ = lambda self, name: _swig_getattr(self, preempt_job_info, name)
+    __repr__ = _swig_repr
+    __swig_setmethods__["job_id"] = _pbs.preempt_job_info_job_id_set
+    __swig_getmethods__["job_id"] = _pbs.preempt_job_info_job_id_get
+    if _newclass:
+        job_id = _swig_property(_pbs.preempt_job_info_job_id_get, _pbs.preempt_job_info_job_id_set)
+    __swig_setmethods__["order"] = _pbs.preempt_job_info_order_set
+    __swig_getmethods__["order"] = _pbs.preempt_job_info_order_get
+    if _newclass:
+        order = _swig_property(_pbs.preempt_job_info_order_get, _pbs.preempt_job_info_order_set)
+
+    def __init__(self):
+        this = _pbs.new_preempt_job_info()
+        try:
+            self.this.append(this)
+        except __builtin__.Exception:
+            self.this = this
+    __swig_destroy__ = _pbs.delete_preempt_job_info
+    __del__ = lambda self: None
+preempt_job_info_swigregister = _pbs.preempt_job_info_swigregister
+preempt_job_info_swigregister(preempt_job_info)
+
+RESV_NONE = _pbs.RESV_NONE
+RESV_UNCONFIRMED = _pbs.RESV_UNCONFIRMED
+RESV_CONFIRMED = _pbs.RESV_CONFIRMED
+RESV_WAIT = _pbs.RESV_WAIT
+RESV_TIME_TO_RUN = _pbs.RESV_TIME_TO_RUN
+RESV_RUNNING = _pbs.RESV_RUNNING
+RESV_FINISHED = _pbs.RESV_FINISHED
+RESV_BEING_DELETED = _pbs.RESV_BEING_DELETED
+RESV_DELETED = _pbs.RESV_DELETED
+RESV_DELETING_JOBS = _pbs.RESV_DELETING_JOBS
+RESV_DEGRADED = _pbs.RESV_DEGRADED
+RESV_BEING_ALTERED = _pbs.RESV_BEING_ALTERED
+RESV_IN_CONFLICT = _pbs.RESV_IN_CONFLICT
+
+def __pbs_errno_location():
+    return _pbs.__pbs_errno_location()
+__pbs_errno_location = _pbs.__pbs_errno_location
+
+def __pbs_server_location():
+    return _pbs.__pbs_server_location()
+__pbs_server_location = _pbs.__pbs_server_location
+
+def pbs_asyrunjob(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_asyrunjob(arg1, arg2, arg3, arg4)
+pbs_asyrunjob = _pbs.pbs_asyrunjob
+
+def pbs_alterjob(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_alterjob(arg1, arg2, arg3, arg4)
+pbs_alterjob = _pbs.pbs_alterjob
+
+def pbs_asyalterjob(c, jobid, attrib, extend):
+    return _pbs.pbs_asyalterjob(c, jobid, attrib, extend)
+pbs_asyalterjob = _pbs.pbs_asyalterjob
+
+def pbs_confirmresv(arg1, arg2, arg3, arg4, arg5):
+    return _pbs.pbs_confirmresv(arg1, arg2, arg3, arg4, arg5)
+pbs_confirmresv = _pbs.pbs_confirmresv
+
+def pbs_connect(arg1):
+    return _pbs.pbs_connect(arg1)
+pbs_connect = _pbs.pbs_connect
+
+def pbs_connect_extend(arg1, arg2):
+    return _pbs.pbs_connect_extend(arg1, arg2)
+pbs_connect_extend = _pbs.pbs_connect_extend
+
+def pbs_disconnect(arg1):
+    return _pbs.pbs_disconnect(arg1)
+pbs_disconnect = _pbs.pbs_disconnect
+
+def pbs_manager(arg1, arg2, arg3, arg4, arg5, arg6):
+    return _pbs.pbs_manager(arg1, arg2, arg3, arg4, arg5, arg6)
+pbs_manager = _pbs.pbs_manager
+
+def pbs_default():
+    return _pbs.pbs_default()
+pbs_default = _pbs.pbs_default
+
+def pbs_deljob(arg1, arg2, arg3):
+    return _pbs.pbs_deljob(arg1, arg2, arg3)
+pbs_deljob = _pbs.pbs_deljob
+
+def pbs_geterrmsg(arg1):
+    return _pbs.pbs_geterrmsg(arg1)
+pbs_geterrmsg = _pbs.pbs_geterrmsg
+
+def pbs_holdjob(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_holdjob(arg1, arg2, arg3, arg4)
+pbs_holdjob = _pbs.pbs_holdjob
+
+def pbs_loadconf(arg1):
+    return _pbs.pbs_loadconf(arg1)
+pbs_loadconf = _pbs.pbs_loadconf
+
+def pbs_locjob(arg1, arg2, arg3):
+    return _pbs.pbs_locjob(arg1, arg2, arg3)
+pbs_locjob = _pbs.pbs_locjob
+
+def pbs_movejob(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_movejob(arg1, arg2, arg3, arg4)
+pbs_movejob = _pbs.pbs_movejob
+
+def pbs_msgjob(arg1, arg2, arg3, arg4, arg5):
+    return _pbs.pbs_msgjob(arg1, arg2, arg3, arg4, arg5)
+pbs_msgjob = _pbs.pbs_msgjob
+
+def pbs_relnodesjob(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_relnodesjob(arg1, arg2, arg3, arg4)
+pbs_relnodesjob = _pbs.pbs_relnodesjob
+
+def pbs_orderjob(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_orderjob(arg1, arg2, arg3, arg4)
+pbs_orderjob = _pbs.pbs_orderjob
+
+def pbs_rerunjob(arg1, arg2, arg3):
+    return _pbs.pbs_rerunjob(arg1, arg2, arg3)
+pbs_rerunjob = _pbs.pbs_rerunjob
+
+def pbs_rlsjob(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_rlsjob(arg1, arg2, arg3, arg4)
+pbs_rlsjob = _pbs.pbs_rlsjob
+
+def pbs_runjob(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_runjob(arg1, arg2, arg3, arg4)
+pbs_runjob = _pbs.pbs_runjob
+
+def pbs_selectjob(arg1, arg2, arg3):
+    return _pbs.pbs_selectjob(arg1, arg2, arg3)
+pbs_selectjob = _pbs.pbs_selectjob
+
+def pbs_sigjob(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_sigjob(arg1, arg2, arg3, arg4)
+pbs_sigjob = _pbs.pbs_sigjob
+
+def pbs_statfree(arg1):
+    return _pbs.pbs_statfree(arg1)
+pbs_statfree = _pbs.pbs_statfree
+
+def pbs_statrsc(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_statrsc(arg1, arg2, arg3, arg4)
+pbs_statrsc = _pbs.pbs_statrsc
+
+def pbs_statjob(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_statjob(arg1, arg2, arg3, arg4)
+pbs_statjob = _pbs.pbs_statjob
+
+def pbs_selstat(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_selstat(arg1, arg2, arg3, arg4)
+pbs_selstat = _pbs.pbs_selstat
+
+def pbs_statque(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_statque(arg1, arg2, arg3, arg4)
+pbs_statque = _pbs.pbs_statque
+
+def pbs_statserver(arg1, arg2, arg3):
+    return _pbs.pbs_statserver(arg1, arg2, arg3)
+pbs_statserver = _pbs.pbs_statserver
+
+def pbs_statsched(arg1, arg2, arg3):
+    return _pbs.pbs_statsched(arg1, arg2, arg3)
+pbs_statsched = _pbs.pbs_statsched
+
+def pbs_stathost(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_stathost(arg1, arg2, arg3, arg4)
+pbs_stathost = _pbs.pbs_stathost
+
+def pbs_statnode(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_statnode(arg1, arg2, arg3, arg4)
+pbs_statnode = _pbs.pbs_statnode
+
+def pbs_statvnode(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_statvnode(arg1, arg2, arg3, arg4)
+pbs_statvnode = _pbs.pbs_statvnode
+
+def pbs_statresv(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_statresv(arg1, arg2, arg3, arg4)
+pbs_statresv = _pbs.pbs_statresv
+
+def pbs_stathook(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_stathook(arg1, arg2, arg3, arg4)
+pbs_stathook = _pbs.pbs_stathook
+
+def pbs_get_attributes_in_error(arg1):
+    return _pbs.pbs_get_attributes_in_error(arg1)
+pbs_get_attributes_in_error = _pbs.pbs_get_attributes_in_error
+
+def pbs_submit(arg1, arg2, arg3, arg4, arg5):
+    return _pbs.pbs_submit(arg1, arg2, arg3, arg4, arg5)
+pbs_submit = _pbs.pbs_submit
+
+def pbs_submit_resv(arg1, arg2, arg3):
+    return _pbs.pbs_submit_resv(arg1, arg2, arg3)
+pbs_submit_resv = _pbs.pbs_submit_resv
+
+def pbs_delresv(arg1, arg2, arg3):
+    return _pbs.pbs_delresv(arg1, arg2, arg3)
+pbs_delresv = _pbs.pbs_delresv
+
+def pbs_terminate(arg1, arg2, arg3):
+    return _pbs.pbs_terminate(arg1, arg2, arg3)
+pbs_terminate = _pbs.pbs_terminate
+
+def pbs_modify_resv(arg1, arg2, arg3, arg4):
+    return _pbs.pbs_modify_resv(arg1, arg2, arg3, arg4)
+pbs_modify_resv = _pbs.pbs_modify_resv
+
+def pbs_preempt_jobs(arg1, arg2):
+    return _pbs.pbs_preempt_jobs(arg1, arg2)
+pbs_preempt_jobs = _pbs.pbs_preempt_jobs
+# This file is compatible with both classic and new-style classes.
+
+cvar = _pbs.cvar
+
diff --git a/pbs/pbsutils.py b/pbs/pbsutils.py
new file mode 100644
index 0000000000000000000000000000000000000000..3f9176f42bf8f7f7387e1ec31b5f7100a46da16c
--- /dev/null
+++ b/pbs/pbsutils.py
@@ -0,0 +1,490 @@
+'''
+Module that contains utility functions for the pbsweb application.
+
+This code was developed by Mike Lake <Mike.Lake@uts.edu.au>.
+
+License:
+
+  Copyright 2019 University of Technology Sydney
+
+  This program is free software: you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation, either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+'''
+
+# List of public objects that are imported by import *.
+__all__ = ['get_nodes', 'get_queues', 'get_jobs', 'get_node_totals', \
+           'node_attributes_reformat', 'queue_attributes_reformat', 'job_attributes_reformat']
+
+import pbs
+import os, datetime, time
+import re
+
+def _epoch_to_localtime(epoch_time, format_str):
+    '''
+    Converts an epoch time like 1426133709 into '2015-03-12 at 03:15 PM'.
+    '''
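+    # For example (the exact output depends on the local timezone):
+    #   _epoch_to_localtime(1426133709, "%Y-%m-%d at %I:%M %p")  ->  '2015-03-12 at 03:15 PM'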
+    temp = time.localtime(int(epoch_time))
+    return time.strftime(format_str, temp)
+
+def _show_attr_name_remapping(conn):
+    '''
+    This is a debugging function. It displays all the resources_available, 
+    resources_assigned and their attributes and values.
+    '''
+    b = pbs.pbs_statvnode(conn, '', None, None)
+    while b != None:
+        attributes = {} # Init the dictionary to empty.
+        attribs = b.attribs # The parameter attrib is a pointer to an attrl structure.
+        attributes['node_name'] = b.name
+        while attribs != None:
+            if attribs.resource != None:
+                print('    ', attribs.name, ':', attribs.resource, '=', attribs.value)
+                keyname = '%s_%s' % (attribs.name, attribs.resource)
+                attributes[keyname] = attribs.value
+            else:
+                attributes[attribs.name] = attribs.value
+
+            attribs = attribs.next
+
+        b = b.next
+
+def get_nodes(conn):
+    '''
+    Get information on the PBS nodes. It is the equivalent of "pbsnodes -a".
+    This function returns a list of nodes, where each node is a dictionary.
+
+    Uncommenting the print statements in this function will show information like this:
+
+      ------------ hpcnode20 ------------------
+      Mom : hpcnode20
+      Port : 15002
+      pbs_version : 14.2.2.20170505010934
+      ntype : PBS
+      state : free
+      pcpus : 28
+      jobs : 100932.hpcnode0/0, 100932.hpcnode0/1, 100932.hpcnode0/2, 100932.hpcnode0/3,
+             100967.hpcnode0/1, 100967.hpcnode0/2, 100967.hpcnode0/3
+        resources_available : arch = linux
+        resources_available : host = hpcnode20
+        resources_available : mem = 529331720kb
+        resources_available : ncpus = 28
+        resources_available : vnode = hpcnode20
+        resources_assigned : accelerator_memory = 0kb
+        resources_assigned : icpus = 0
+        resources_assigned : mem = 524288000kb
+        resources_assigned : naccelerators = 0
+        resources_assigned : ncpus = 7
+        resources_assigned : ngpus = 0
+        resources_assigned : vmem = 0kb
+      resv_enable : True
+      sharing : default_shared
+
+    To make the returned dictionary simpler we rename all the resources_available and
+    resources_assigned above to be a key like this:
+        ...
+        resources_available : mem  => resources_available_mem
+        resources_assigned : ncpus => resources_assigned_ncpus
+        resources_assigned : ngpus => resources_assigned_ngpus
+        ... etc
+    This is done in the line below:
+        keyname = '%s_%s' % (attribs.name, attribs.resource)
+
+    We then append this dictionary to the list of nodes.
+
+    '''
+    nodes = [] # This will contain a list of dictionaries.
+
+    # The function pbs_statvnode (and likewise pbs_statque & pbs_statjob)
+    # returns a batch_status structure.
+    b = pbs.pbs_statvnode(conn, '', None, None)
+    while b != None:
+        attributes = {} # Init the dictionary to empty.
+        attribs = b.attribs # The parameter attrib is a pointer to an attrl structure.
+        #print('------------', b.name, '------------------')
+        attributes['node_name'] = b.name
+        while attribs != None:
+            if attribs.resource != None:
+                # The debugging print below here is indented a bit more to distinguish
+                # resource attributes from non-resource attributes.
+                #print('    ', attribs.name, ':', attribs.resource, '=', attribs.value)
+                keyname = '%s_%s' % (attribs.name, attribs.resource)
+                attributes[keyname] = attribs.value
+            else:
+                #print('  ', attribs.name, ':', attribs.value)
+                # e.g. acl_user_enable : True
+                attributes[attribs.name] = attribs.value
+
+            # This line must be present or you will loop forever!
+            attribs = attribs.next
+
+        nodes.append(attributes)
+        b = b.next
+
+    # Sort the nodes by the node's name.
+    nodes = sorted(nodes, key=lambda k: k['node_name'])
+
+    return nodes
+
+def get_queues(conn):
+    '''
+    Get information on the PBS queues.
+    This function returns a list of queues, where each queue is a dictionary.
+
+    Example: Queue Name = smallq
+
+    if attribs.resource == None    <== we get the attribs:
+       name       : value
+       ----         -----
+       queue_type : Execution
+       total_jobs : 49
+       state_count : Transit:0 Queued:18 Held:0 Waiting:0 Running:30 Exiting:0 Begun:1
+       max_run : [u:PBS_GENERIC=12]
+       enabled : True
+       started : True
+
+    if attribs.resource != None    <== we get the attribs:
+       name          :      resource = value
+       ----                 --------   -----
+       resources_max :      mem      = 32gb
+       resources_max :      ncpus    = 2
+       resources_max :      walltime = 200:00:00
+       resources_default :  walltime = 24:00:00
+       resources_assigned : mem      = 598gb
+       resources_assigned : ncpus    = 57
+       resources_assigned : nodect   = 29
+
+    To make the returned dictionary simpler we rename the name:resource above
+    to be a key like this:
+
+    resources_max : mem          =>  resources_max_mem
+    resources_max : ncpus        =>  resources_max_ncpus
+    resources_max : walltime     =>  resources_max_walltime
+    resources_default : walltime =>  resources_default_walltime
+    resources_assigned : mem     =>  resources_assigned_mem
+    resources_assigned : ncpus   =>  resources_assigned_ncpus
+    resources_assigned : nodect  =>  resources_assigned_nodect
+    '''
+
+    queues = [] # This will contain a list of dictionaries.
+
+    # Some of the attributes are not present for all queues so we list them all
+    # here and in the loop below set them to None. For instance, a routing queue
+    # does not have some of these attributes.
+    attribute_names = ['resources_max_mem','resources_max_ncpus','resources_max_walltime', \
+            'resources_assigned_mem','resources_assigned_ncpus', \
+            'resources_default_walltime', 'max_run', 'state_count', 'acl_user_enable']
+
+    b = pbs.pbs_statque(conn, '', None, None)
+    while b != None:
+        attributes = {} # Init the dictionary to empty.
+        for name in attribute_names:
+            attributes[name] = None
+
+        attribs = b.attribs
+        #print('METHODS: ', dir(attribs))  # Uncomment to see what methods are available.
+        #print('------------ Queue %s ------------' % b.name)
+        attributes['queue_name'] = b.name
+        while attribs != None:
+            if attribs.resource != None:
+                # The print below here is indented a bit more to distinguish
+                # resource attributes from non-resource attributes.
+                #print('    ', attribs.name, ':', attribs.resource, '=', attribs.value)
+                keyname = '%s_%s' % (attribs.name, attribs.resource)
+                attributes[keyname] = attribs.value
+            else:
+                #print('  ', attribs.name, ':', attribs.value)
+                # e.g. acl_user_enable : True
+                attributes[attribs.name] = attribs.value
+
+            attribs = attribs.next
+
+        # Don't save the defaultq as this is a routing queue.
+        # TODO move this to reformat?
+        if attributes['queue_name'] != 'defaultq':
+            queues.append(attributes)
+
+        b = b.next
+
+    return queues
+
+def get_jobs(conn, extend=None):
+    '''
+    Get information on the PBS jobs.
+    This function returns a list of jobs, where each job is a dictionary.
+
+    This is the list of resources requested by the job, e.g.:
+      Resource_List : mem = 120gb
+      Resource_List : ncpus = 24
+      Resource_List : nodect = 1
+      Resource_List : place = free
+      Resource_List : select = 1:ncpus=24:mem=120GB
+      Resource_List : walltime = 200:00:00
+
+    These are non-resource attributes, e.g.
+        Job_Name : AuCuZn
+        Job_Owner : 999777@hpcnode0
+        job_state : Q
+        queue : workq
+        server : hpcnode0
+      etc ....
+
+    '''
+
+    jobs = [] # This will contain a list of dictionaries.
+
+    # Some jobs don't yet have a particular attribute as the job hasn't started yet.
+    # We have to create that key and set it to something, otherwise we get errors like:
+    #   NameError("name 'resources_used_ncpus' is not defined",)
+    attribute_names = ['resources_used_ncpus', 'resources_used_mem', 'resources_used_vmem', \
+        'resources_used_walltime', 'exec_host', 'exec_vnode', 'stime', 'etime', 'resources_time_left', \
+        'resources_used_cpupercent']
+
+    b = pbs.pbs_statjob(conn, '', None, extend)
+    while b != None:
+        attributes = {} # Init the dictionary to empty.
+        # Init the values of the attributes.
+        for name in attribute_names:
+            attributes[name] = ''
+        for name in ['resources_used_walltime', 'resources_used_cput', 'resource_list_walltime']:
+            attributes[name] = '0:0:0'
+
+        attribs = b.attribs
+        #print('-----------', b.name, '-------------------')
+        attributes['job_id'] = b.name.split('.')[0] # b.name is a string like '137550.hpcnode0'
+        while attribs != None:
+            if attribs.resource != None:
+                #print('    ', attribs.name, ':', attribs.resource, '=', attribs.value)
+                keyname = '%s_%s' % (attribs.name, attribs.resource)
+                keyname = keyname.lower()
+                attributes[keyname] = attribs.value
+            else:
+                #print('  ', attribs.name, ':', attribs.value)
+                keyname = attribs.name.lower()
+                attributes[keyname] = attribs.value
+
+            attribs = attribs.next
+
+        jobs.append(attributes)
+        b = b.next
+
+    return jobs
+
+def get_node_totals(nodes):
+    '''
+    Get totals of some attributes for all the nodes.
+    '''
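+    # Note: the arithmetic below assumes the nodes have already been through
+    # node_attributes_reformat(), so that the mem values are plain numbers of Gb
+    # (a raw PBS value such as '529331720kb' would make int() fail) and 'jobs'
+    # is a list of job ids rather than the raw comma-separated string.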
+    totals = {}
+    totals['jobs_total'] = 0     # Total of all jobs across the cluster.
+    totals['cpus_available'] = 0 # Total of all available cpus across the cluster.
+    totals['cpus_assigned'] = 0  # Total of all assigned cpus across the cluster.
+    totals['mem_available'] = 0  # Total of all available memory across the cluster.
+    totals['mem_assigned'] = 0   # Total of all assigned memory across the cluster.
+
+    for n in nodes:
+        totals['jobs_total'] = totals['jobs_total'] + len(n['jobs'])
+        totals['cpus_available'] = totals['cpus_available'] + int(n['resources_available_ncpus'])
+        totals['cpus_assigned'] = totals['cpus_assigned'] + int(n['resources_assigned_ncpus'])
+        totals['mem_available'] = totals['mem_available'] + int(n['resources_available_mem'])
+        totals['mem_assigned'] = totals['mem_assigned'] + int(n['resources_assigned_mem'])
+
+    totals['cpus_ratio'] = int(100 * float(totals['cpus_assigned']) / float(totals['cpus_available']) )
+    totals['mem_ratio']  = int(100 * float(totals['mem_assigned'])  / float(totals['mem_available']) )
+
+    return totals
+
+def node_attributes_reformat(nodes):
+
+    for node in nodes:
+        #print('---------')
+        #for attribute in node.keys():
+        #    print('    ', attribute, node[attribute])
+
+        # There are certain keys that we always want to be present.
+        # If they are not present create them with zero value.
+        for attribute in \
+            ['resources_available_mem', 'resources_available_ncpus', 'resources_available_ngpus', \
+             'resources_assigned_mem', 'resources_assigned_ncpus', 'resources_assigned_ngpus']:
+            if attribute not in node.keys():
+                node[attribute] = 0
+
+        if 'comment' not in node.keys():
+            node['comment'] = ''
+        if 'jobs' not in node.keys():
+            node['jobs'] = ''
+
+        # Change jobs from string to a list.
+        # jobs is a string like this:
+        #   105059.hpcnode0/0, 105059.hpcnode0/1, 105059.hpcnode0/2, 105059.hpcnode0/3,     \ Job 105059
+        #   105059.hpcnode0/4, 105059.hpcnode0/5, 105059.hpcnode0/6, 105059.hpcnode0/7,     /
+        #   105067.hpcnode0/8, 105067.hpcnode0/9, 105067.hpcnode0/10, 105067.hpcnode0/11,   \ Job 105067
+        #   105067.hpcnode0/12, 105067.hpcnode0/13, 105067.hpcnode0/14, 105067.hpcnode0/15, /
+        #   105068.hpcnode0/16, 105068.hpcnode0/17, 105068.hpcnode0/18, 105068.hpcnode0/19, \ Job 105068
+        #   105068.hpcnode0/20, 105068.hpcnode0/21, 105068.hpcnode0/22, 105068.hpcnode0/23  /
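+        # For the example above this yields the unique job ids, e.g.
+        # ['105059', '105067', '105068'] (in arbitrary order, since a set is unordered).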
+        if node['jobs']:
+            # remove whitespace from string
+            jobs_string = node['jobs'].replace(' ', '')
+            # split on comma, then take first part of split on '.' & turn it into a set.
+            jobs_unique = set([j.split('.')[0] for j in jobs_string.split(',')])
+            # Turn it back into a list which will now be the unique jobs
+            node['jobs'] = list(jobs_unique)
+        else:
+            node['jobs'] = []
+
+        # Change memory from a string in kb (eg '264501336kb') to an integer number of Gb (eg 252).
+        if node['resources_available_mem']:
+            m = re.match('^([0-9]+)kb$', node['resources_available_mem'])
+            node['resources_available_mem'] = '%d' % (int(m.group(1))/1024/1024)
+        if node['resources_assigned_mem']:
+            m = re.match('^([0-9]+)kb$', node['resources_assigned_mem'])
+            node['resources_assigned_mem'] = '%d' % (int(m.group(1))/1024/1024)
+
+        # Create a new attribute 'state_up' to indicate if the node is up or not as
+        # 'state' can be one of busy, free, job-busy, job-exclusive, down, or offline.
+        # If busy, free, job-busy, job-exclusive <-- OK node is up.
+        # If down, offline                       <-- Problem, node is down.
+        node['state_up'] = True
+        if 'down' in node['state'] or 'offline' in node['state']:
+            node['state_up'] = False
+
+        # Create a new attribute 'cpu_ratio' to use in the web display.
+        if node['resources_available_ncpus'] != 0:
+            node['cpu_ratio'] = 100 * int(node['resources_assigned_ncpus']) \
+                / int(node['resources_available_ncpus'])
+        else:
+            node['cpu_ratio'] = 0
+
+        # Create a new attribute 'mem_ratio' to use in the web display.
+        if int(node['resources_available_mem']) != 0:
+            node['mem_ratio'] = 100 * int(node['resources_assigned_mem']) \
+                / int(node['resources_available_mem'])
+        else:
+            node['mem_ratio'] = 0
+
+    return nodes
+
+def queue_attributes_reformat(queues):
+
+    # Here we cover the special case of formatting the state count.
+    # It is an attribute like this:
+    #   state_count : Transit:0 Queued:11 Held:0 Waiting:0 Running:20 Exiting:0 Begun:0
+    # and we want it as a dictionary like this:
+    #   state_count = {'Transit': 0, 'Queued': 11, 'Held': 0, 'Waiting': 0, 'Running': 20, 'Exiting': 0, 'Begun': 0}
+    for queue in queues:
+        this_state = {}
+        max_run = None  # Reset for each queue so a previous queue's value cannot carry over.
+        for key in queue.keys():
+            if key == 'state_count' and queue['state_count']:
+                state_count_list = queue['state_count'].split()
+                for item in state_count_list:
+                    (name, value) = item.split(':')
+                    this_state[name] = int(value)
+            if key == 'max_run' and queue['max_run']:
+                # e.g. a value of '[u:PBS_GENERIC=12]' parses to 12.
+                max_run = int(queue['max_run'].split('=')[1].replace(']', ''))
+        queue['max_run'] = max_run
+        queue['state_count'] = this_state
+
+        # Get the jobs queued and running from the state_count and not total_jobs.
+        queue['jobs_running'] = queue['state_count']['Running']
+        queue['jobs_queued']  = queue['state_count']['Queued']
+
+    return queues
+
+def job_attributes_reformat(jobs):
+    '''
+    Reformat job attributes like changing epoch time to local time,
+    queue codes to more understandable words, memory from bytes to MB or GB.
+    '''
+
+    for job in jobs:
+        # There are some keys that we will never use, remove them.
+        job.pop('variable_list', None)
+        job.pop('submit_arguments', None)
+        job.pop('error_path', None)
+        job.pop('output_path', None)
+
+        # Jobs might be split across hosts or vnodes, in which case the values look like this:
+        # e.g. exec_host  = hpcnode03/1+hpcnode04/1
+        #      exec_vnode = (hpcnode03:ncpus=1:mem=5242880kb)+(hpcnode04:ncpus=1:mem=5242880kb)
+        # Users may wish to use either exec_host or exec_vnode in their HTML templates for
+        # displaying what host/vnode their job is running on. Here we format both into plain strings.
+        if job['exec_host']:
+            # e.g. exec_host = hpcnode03/1+hpcnode04/1
+            # Splitting on the + will give a list ['hpcnode03/1', 'hpcnode04/1']
+            # Then the list comprehension and split will turn this into ['hpcnode03', 'hpcnode04']
+            # Finally convert this into a string. Use whitespace delimiter so HTML pages will wrap it if needed.
+            job['exec_host'] = job['exec_host'].split('+')
+            job['exec_host'] = [s.split('/')[0] for s in job['exec_host']]
+            job['exec_host'] = ' '.join(job['exec_host'])
+        if job['exec_vnode']:
+            # e.g. exec_vnode = (hpcnode03:ncpus=1:mem=5242880kb)+(hpcnode04:ncpus=1:mem=5242880kb)
+            # Splitting on the + will give [(hpcnode03:ncpus=1:mem=5242880kb), (hpcnode04:ncpus=1:mem=5242880kb)]
+            # Then the list comprehension and split etc gives ['hpcnode03', 'hpcnode04']
+            # Finally convert this into a string. Use whitespace delimiter so HTML pages will wrap it if needed.
+            job['exec_vnode'] = job['exec_vnode'].split('+')
+            job['exec_vnode'] = [s.split(':')[0].lstrip('(') for s in job['exec_vnode']]
+            job['exec_vnode'] = ' '.join(job['exec_vnode'])
+
+        # This splits user_name@hostname to get just the user_name.
+        job['job_owner'] = job['job_owner'].split('@')[0]
+
+        # All of these times are given in seconds since the epoch
+        # (qstat -f displays them as local times, e.g. "Fri Mar  6 14:36:07 2015").
+        # ctime = time job was created
+        # qtime = time job entered the queue
+        # etime = time job became eligible to run
+        # stime = time job started execution
+        # mtime = time job was last modified
+
+        # Calculate a wait time = time started - time entered queue. This will be in seconds.
+        if job['qtime'] and job['stime']:
+            job['wtime'] = int(job['stime']) - int(job['qtime'])
+            job['wtime'] = '%.0f' % (job['wtime'] / 3600.0) # convert to hours
+        else:
+            job['wtime'] = ''
+
+        # Change time since epoch to localtime.
+        # If the job has not yet queued or started then that time will be ''.
+        if job['qtime']:
+            job['qtime'] = _epoch_to_localtime(job['qtime'], "%Y-%m-%d at %I:%M %p")
+        if job['stime']:
+            job['stime'] = _epoch_to_localtime(job['stime'], "%Y-%m-%d at %I:%M %p")
+
+        # If the job was queued or started today remove the leading date.
+        today = datetime.datetime.now().strftime('%Y-%m-%d')
+        if job['qtime'].startswith(today):
+            job['qtime'] = job['qtime'].replace('%s at' % today, '')
+        if job['stime'].startswith(today):
+            job['stime'] = job['stime'].replace('%s at' % today, '')
+
+        # Change queue code to a word. For queue states see man qstat.
+        states = {'B': 'Array job', 'E': 'Exiting', 'F': 'Finished', 'H': 'Held',
+                  'M': 'Moved', 'Q': 'Queued', 'R': 'Running', 'S': 'Suspended',
+                  'T': 'Transiting', 'U': 'User suspended', 'W': 'Waiting', 'X': 'Finished'}
+        job['job_state'] = states.get(job['job_state'], job['job_state'])
+
+        # Calculate a time left from list walltime and used walltime.
+        if job['resources_used_walltime']:
+            (H, M, S) = job['resources_used_walltime'].split(':')
+            used_walltime = float(H) + float(M)/60.0 + float(S)/3600.0 
+            (H, M, S) = job['resource_list_walltime'].split(':')
+            list_walltime = float(H) + float(M)/60.0 + float(S)/3600.0 
+            # TODO maybe convert this to a float with one decimal place? or raw float
+            job['resources_time_left'] = int(list_walltime) - int(used_walltime)
+
+        # Change memory from a string in kb (eg '264501336kb') to an integer number of GB (eg 252).
+        if 'resource_list_mem' in job:
+            job['resource_list_mem'] = job['resource_list_mem'].replace('gb', '')
+        if job['resources_used_mem']:
+            m = re.match('^([0-9]+)kb$', job['resources_used_mem'])
+            job['resources_used_mem'] = '%d' % (int(m.group(1))/1024/1024)
+        if job['resources_used_vmem']:
+            m = re.match('^([0-9]+)kb$', job['resources_used_vmem'])
+            job['resources_used_vmem'] = '%d' % (int(m.group(1))/1024/1024)
+
+    return jobs
+
diff --git a/pbs/swig_compile_pbs.sh b/pbs/swig_compile_pbs.sh
new file mode 100755
index 0000000000000000000000000000000000000000..42ba2babe68306b1e2d99969a28b2d33df41a018
--- /dev/null
+++ b/pbs/swig_compile_pbs.sh
@@ -0,0 +1,44 @@
+#!/bin/bash
+
+# Uses the PBS files pbs_ifl.h and pbs.i to create pbs.py, pbs_wrap.c and _pbs.so.
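+# Run this script from the pbs directory, i.e. the directory containing pbs.i.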
+
+conf="/etc/pbs.conf"  # PBS configuration file
+
+#############################
+# Set your configuration here
+#############################
+
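+# PYTHON_INCL is the include directory (the one containing Python.h) of the
+# Python that will be importing the compiled pbs module.
+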
+# Example: If using my laptop.
+PYTHON_INCL="/usr/include/python3.8"
+
+# Example: If using Python in a virtual environment.
+#PYTHON_INCL=/var/www/wsgi/virtualenvs/pbsweb/include/python3.8
+
+SWIG_EXEC="/usr/bin/swig"
+
+# You should not need to change anything below here.
+
+# Make sure we have a PBS config file.
+if [ ! -f $conf ]; then
+   echo "Error: missing PBS configuration file $conf"
+   exit 1
+fi
+
+# The PBS config file must be sourced to provide $PBS_EXEC.
+. $conf
+
+# Running swig creates pbs.py and pbs_wrap.c
+$SWIG_EXEC -I$PBS_EXEC/include -python pbs.i
+
+if [ $? -ne 0 ]; then
+    echo "Error: You are probably missing the file: $PBS_EXEC/include/pbs_ifl.h"
+    exit 1
+fi
+
+# Running gcc creates _pbs.so
+gcc -shared -fPIC -I$PYTHON_INCL -I$PBS_EXEC/include pbs_wrap.c $PBS_EXEC/lib/libpbs.a \
+    -o _pbs.so -L/lib -lcrypto -lssl
+
+# It does not need to be executable.
+chmod ugo-x _pbs.so
+
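+# As a quick sanity check you can try importing the newly built module from
+# this directory, e.g.  python3 -c "import pbs"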