Sunday, March 31, 2013

Configuring TLS on Postfix - RedHat AWS EC2 instance - Amazon Cloud

Please note: this didn't really end up working. I'm leaving notes for all the things I did get working, however. This is old and not recommended - just use the AWS mail services or some other cloud-based mail service and save yourself some pain.

AWS SES
AWS Workmail
Gmail

---
Attempting to set up Postfix on an Amazon EC2 instance with TLS and authentication to work with the Postini mail security service. Steps taken (caveat: I had never done this before today and am currently re-learning Linux):

If you haven't already created a private key, create one, because you'll need it to connect to the Linux server below:
 

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-credentials.html#using-credentials-login-password

If you haven't already created a VPC (Virtual Private Cloud), create one, or don't use a VPC in the example below.

AWS documentation has four scenarios for setting up VPCs for different purposes:


http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenarios.html

Configure the VPC security group you will use for the mail server below to allow access from the IP address of the machine you are using to administer the EC2 instance. You can find your IP address as the outside world sees it (not the local IP address internal to your network) at the top of http://dnsstuff.com. You can give this IP access to all ports and protocols, or just the ports and protocols needed to set up the mail server (22 for SSH, 25 for SMTP, any non-standard ports you want to send mail from, etc.).


You'll also need to open up access to Postini, and from any servers from which you want to relay mail to Postini, on the applicable mail ports (25, etc.). See the link regarding Postini outbound configuration in the steps below.

Hint: Before even attempting anything with Postfix, make sure your network connectivity is set up correctly (your VPC and the firewalls on your machines). If at any point you are trying to send mail from another machine through the Postfix server and no messages show up in the Postfix logs, it could be a network problem that has nothing to do with Postfix: a connection may never have been made between the local and remote machines. You'll need to fix that first. You can test connectivity by opening all ports between the various machines and IPs (or at least the ports and protocols required), then verifying that the machine you are trying to send mail from can actually reach the Postfix machine. Once that's working, move on to Postfix configuration.
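As a quick reachability test from the sending machine, here is a minimal sketch. The host and port are placeholders - substitute your Postfix server's elastic IP and the SMTP port you opened in the VPC security group.

```shell
# check_port: prints "open" or "closed" for a host:port pair.
# Uses bash's /dev/tcp with a 5 second timeout, so no extra tools are needed.
check_port() {
  if timeout 5 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# 203.0.113.10 is a documentation-range placeholder, not a real server.
check_port 203.0.113.10 25
```

If this prints "closed" from the machine you intend to send mail from, fix the VPC rules and firewalls before touching the Postfix configuration.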

I checked, and by default the Amazon AWS EC2 RedHat instances do not have a firewall running. I did have to adjust the firewall on a Windows instance to open the necessary ports to access the Linux machine.

Ok now for the nitty gritty.

The goal here is to send mail from an application server through the mail server to Postini, which will verify and encrypt the mail and route it to the appropriate party. Using Postini ensures only valid IP ranges can send mail outbound and lets you enforce TLS encryption. Postini also checks outbound mail for viruses and spam.


--- Get EC2 Linux RedHat instance from Amazon ---

1. login and go to EC2 Dashboard
2. Click "Launch instance"
3. Choose classic wizard
4. Click continue
5. Click Community AMIs
6. Enter Amazon Linux ID you want to use (ami-8e27adbe for oregon or see bottom of this page http://aws.amazon.com/amazon-linux-ami/)
Note: Make sure you select an ami in the same region where you created your VPC.
7. Click select
8. Choose size (T1 Micro for my test)
9. Click on EC2-VPC tab (assuming you created a VPC)
10. Choose an Internet accessible subnet for your server where the VPC security group you want to use for your mail server is located.
11. Click continue on next page (I didn't change anything)
12. Update the mail tag to some name that helps you identify this server
13. Click continue
14. Choose a key pair you created previously or create a new one
15. Choose a security group - in my case I created a mail-specific security group that can access the Postini IP range specified for my account and allows access from the servers in my network that are allowed to send mail.

I opened up the ports I have configured my apps to send mail on, plus 25/465 to Postini (it seems to only be sending on 25 at this point, but my apps send mail to the mail server on other ports).

16. Click continue
17. Click launch
18. Click close (unless you want to create alarms)
19. Wait for your instance to fire up -- it will say "initializing" in your instance list
20. Assign an elastic IP address from your VPC to your instance (you may have problems connecting without it)
21. Right click on the instance and choose connect
22. Enter the path to your private key and connect - you should now be able to type commands that execute on your server in the window that pops up.

--- DNS - if you want ---

Configure a domain name to point to your IP address. For example, you may want to configure mail.yourdomain.com to point to your elastic IP address.

I'm not going into details on the above - contact your domain name provider, use a distributed, redundant DNS service like http://EasyDNS.com or check out the options from Amazon DNS service which they call "Route 53" http://aws.amazon.com/route53/.

Configure an SPF record in your DNS to indicate that only Postini IPs are valid senders for your domain - to help prevent people from spoofing your emails and to keep your mail out of spam folders (hopefully).

Something like this, though I want to go back and review this later to make sure it is the best option:

v=spf1 ip4:64.18.0.0/20 ~all
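As a small sanity check on the record itself, here is a sketch. It only verifies the SPF version tag and the trailing soft-fail qualifier - it does not confirm that 64.18.0.0/20 is still Postini's current range, so verify that against Postini's documentation.

```shell
# check_spf: a minimal shape check for an SPF TXT record.
# On a live domain you could fetch the record with: dig +short TXT yourdomain.com
check_spf() {
  case "$1" in
    # Must start with the SPF v1 tag and end with ~all (soft-fail), so
    # receivers treat mail from any other IP as suspect.
    "v=spf1 "*"~all") echo "well formed" ;;
    *)                echo "malformed - review it" ;;
  esac
}

check_spf "v=spf1 ip4:64.18.0.0/20 ~all"   # the record from this post
```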


--- Install and Configure Postfix ---

Install Postfix and get rid of Sendmail if you want.

sudo yum install postfix
sudo yum erase sendmail

Note: If you have problems connecting to AWS repo see this post:
http://websitenotebook.blogspot.com/2013/03/connection-timeout-running-yum-on-ec2.html

You might want to back up the original postfix configuration files:

sudo mkdir -p /etc/postfix/bak
sudo cp /etc/postfix/main.cf /etc/postfix/bak/main.cf
sudo cp /etc/postfix/master.cf /etc/postfix/bak/master.cf

Figure out how to use vi again if, like me, you forgot, and edit main.cf in the postfix directory.

Navigate to /etc/postfix

Use VI to edit main.cf:

sudo vi main.cf

hit "i" to insert text

When you are done editing, press Esc, then save and quit by typing:

:wq!

Edit the main.cf file with the following settings:

myhostname = [mail.yourdomain.com or whatever your mail server hostname is]

mydomain = [yourdomain.com or whatever your mail domain is]

inet_interfaces = all  [and comment out anything else]

mydestination = $mydomain, $myhostname

mynetworks = [ip addresses/ranges for which you want to allow relay - limit to trusted networks]

relayhost = outbounds[x].obsmtp.com (where [x] is the number in the domain shown when you log in at Postini -- indicating which particular Postini server(s) your account uses)

Press Esc, then type :wq! to save and exit the file

restart the service:

sudo service postfix restart
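On the server itself, `postconf -n` prints every setting you've changed from the default, which is an easy way to confirm your edits took. As an offline sketch, the sample main.cf below is a stand-in for the real /etc/postfix/main.cf (the values are placeholders):

```shell
# Build a stand-in main.cf so the grep below has something to chew on;
# on the real server you would point at /etc/postfix/main.cf instead.
MAIN_CF=$(mktemp)
cat > "$MAIN_CF" <<'EOF'
myhostname = mail.yourdomain.com
mydomain = yourdomain.com
inet_interfaces = all
mydestination = $mydomain, $myhostname
mynetworks = 10.0.0.0/24
relayhost = outbounds5.obsmtp.com
EOF

# Pull out just the settings this post changes:
grep -E '^(myhostname|mydomain|inet_interfaces|mydestination|mynetworks|relayhost)' "$MAIN_CF"

rm -f "$MAIN_CF"
```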

--- Add your new instance as a reinjection host at Postini ---

At this point, make sure your VPC is allowing access from the mail server via the VPC security group the mail server is in to the Postini IP addresses for your Postini account.

You can then add your new mail server VPC elastic IP address at Postini so Postini can send bounce messages back to your mail server.

Postini will test the connection to your mail server, and that it can send mail, at this point, so you'll know if something is wrong. Check the VPC and your Postfix configuration if Postini can't connect to your mail server.

Postini outbound configuration and IP addresses:

http://www.google.com/support/enterprise/static/postini/docs/admin/en/admin_ee_cu/ob_setup.html 

--- Send a message ---

...ok at this point you should have mail going over to Postini if your firewalls are set up correctly. You can send a test message with this command:


sendmail [put email address here]
FROM: [put email address here like ec2-user@your.domain.com]
SUBJECT: hello world
this is a test email
.

Check postfix logs to see if your message was sent. 

sudo tail -f /var/log/maillog

If you limit sending to TLS at Postini, you'll see an error message about not being able to send unless TLS is enabled:

451 STARTTLS is required for this sender - psmtp (in reply to MAIL FROM command))

Install mutt to check and see if you get a bounce message.

sudo yum install mutt

To run mutt, type:

mutt

When you're done reading messages, type:

q

There won't be a bounce because the message is still in the postfix queue. Check:

postqueue -p
  
You'll see something like this:
 
96D8C200FB      319 Sun Apr  7 05:44:10  ec2-user@mx.domain.com
(host outbounds5.obsmtp.com[64.18.6.12] said: 451 STARTTLS is required for this sender - psmtp (in reply to MAIL FROM command))
                                         user@domain.com

-- 0 Kbytes in 1 Request.


If you log into Postini and change your settings to allow SMTP the message will go through: 

Login, click on outbound servers, choose your domain, click on TLS.

1. Choose how the email protection service accepts outbound messages from your mail server. 


Choose: Accept SMTP and TLS.

Flush the Postfix queue:

sudo postfix flush
     
Now use tail command above and you should see the mail was sent.

Apr  7 06:52:05 ip-10-0-0-112 postfix/smtp[18321]: 96D8C200FB: to=, relay=outbounds5.obsmtp.com[64.18.6.12]:25, delay=4076, delays=4072/0.03/0.15/3, dsn=2.0.0, status=sent (250 Thanks)
Apr  7 06:52:05 ip-10-0-0-112 postfix/qmgr[18226]: 96D8C200FB: removed

 

That's nice. But that's not really what we want. We just sent unencrypted email over the internet which can be read by anyone.

Go back to Postini and choose the option to accept only TLS and save.

This might be a good point to make a backup of your mail config as well.

--- Get an SSL certificate for TLS ---

I noticed that when I installed Postfix, OpenSSL was updated as a dependency:

Processing Dependency: libcrypto.so.10(OPENSSL_1.0.1)(64bit) for package: 2:postfix-2.6.6-2.14.amzn1.x86_64

....

---> Package openssl.x86_64 0:1.0.0j-1.43.amzn1 will be updated
---> Package openssl.x86_64 1:1.0.1e-4.53.amzn1 will be an update
--> Processing Dependency: make for package: 1:openssl-1.0.1e-4.53.amzn1.x86_64
--> Running transaction check
---> Package make.x86_64 1:3.81-20.7.amzn1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved


If needed you could install openssl:

sudo yum install openssl

Make a directory for your SSL files and change into it:

sudo mkdir /etc/postfix/certs
cd /etc/postfix/certs

Digicert has some tools to generate a CSR using openssl:

Info about OpenSSL CSR Creation:

http://www.digicert.com/csr-creation-apache.htm

A form you can fill out to create the code that generates the CSR on your server:

https://www.digicert.com/easy-csr/openssl.htm

The command it creates will look *something* like this. This has my cert specific info so make sure you go to the page above and create a CSR with your specific domain name, location, etc. I added -outform PEM because these things need to be in PEM format and I saw that on another web site. Not sure if it matters.

openssl req -new -outform PEM -newkey rsa:2048 -nodes -out domain_com.csr -keyout star_domain_com.key -subj "/C=US/ST=WA/L=Seattle/O=Radical Software Inc./CN=*.domain.com"

Run the above command in your directory to get the new key and csr files.
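Before pasting the CSR into Digicert's form, it's worth confirming the subject and key size are what you intended, since the CA will issue exactly what you asked for. A sketch using a throwaway key (the subject values here are obviously fake placeholders - your real files are the domain_com.csr/star_domain_com.key pair from the command above):

```shell
# Generate a throwaway 2048-bit key and CSR, mirroring the command above
# but with placeholder subject values.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout throwaway.key -out throwaway.csr \
  -subj "/C=US/ST=WA/L=Seattle/O=Example Inc/CN=*.example.com" 2>/dev/null

# Print the subject embedded in the CSR - check the CN carefully.
openssl req -in throwaway.csr -noout -subject

rm -f throwaway.key throwaway.csr
```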

Go to Digicert and request a new Apache cert. Paste the contents of the CSR file that was generated in your /etc/postfix/certs directory when you ran the above command into the text box for the request on the web site. See Digicert link with instructions above for more info.

Submit and wait for the new cert to be generated (time varies depending on whether it's a new cert or a reissue).

When your cert is ready go to the Digicert web site (I prefer not to use the one that comes in the email), log into your account, and download the new cert.

Choose option for PEM file with all certs in it.

Save it in your /etc/postfix/certs directory as domain.PEM where domain is your domain name.

Download and install PuTTY so you can telnet to your Linux machine. Run it, or use your telnet tool of choice.

http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

Enter your elastic IP, choose Telnet, and set the port to 25. Click open. Type:

EHLO [your mail server domain here]


For some reason the first time I type it, it doesn't work. I type it again and get the desired response, showing TLS is running:


250-your.mailserver.tld
250-PIPELINING
250-SIZE 10240000
250-ETRN
250-STARTTLS
250 8BITMIME
 
Try another test message as described above. You'll see the same error in the logs when you tail (451...). This is because what we just enabled is for inbound mail.

What we need to do now is encrypt outbound mail.

--- Add outbound mail configuration for TLS ---

Add the following to main.cf (point the cert and key settings at the files you created above):

smtp_tls_cert_file = /etc/postfix/certs/cert.pem
smtp_tls_key_file = /etc/postfix/certs/key.pem
smtp_use_tls = yes
smtp_enforce_tls = no

 

Check to see if our message now gets sent:

sudo postfix reload
sudo postfix flush
sudo tail -f /var/log/maillog

Using tail it looks like the message got sent but there's an error message about an untrusted issuer:

Apr  7 08:09:08 ip-10-0-0-112 postfix/smtp[18657]: certificate verification failed for outbounds5.obsmtp.com[64.18.6.12]:25: untrusted issuer /C=US/O=Equifax/OU=Equifax Secure Certificate Authority

What is odd is that the message does actually get sent. You can verify this by checking the queue and it will be empty:

sudo postqueue -p 
Mail queue is empty
 

What is also odd is that the mail doesn't show up right away in the mailbox I sent it to, but does after a delay of many hours. (I went to bed, and in the morning it showed up.)

I contacted DigiCert at this point and was told:

Looking at the error, it states your server is pulling an Equifax certificate.  If you have installed and bound our certificate successfully, then all you would need to do is reboot the server for it to start using our certificate.  If the reboot does not fix your problem, you will need to check the bindings on your server and correct them.

Go to ec2 instances in AWS EC2 dashboard. Right click on instance. Choose reboot.

I tried to send a new email and got the same error in the logs.

Went to the DigiCert web site and downloaded the individual files, zipped.

Copied DigiCertCA.crt to /etc/postfix/certs and renamed with .PEM extension.

Restarted postfix and got an error saying it could not load the CA file and was disabling TLS:

Apr  7 17:03:08 ip-10-0-0-112 postfix/smtp[1687]: cannot load Certificate Authority data: disabling TLS support

I checked, and the file was missing the first few letters. I am not sure if this is a problem when pasting text from a Windows machine into the MindTerm AWS Linux client, or if I just copied only part of the file. I fixed that, saved the file, and reloaded postfix.

No error. Message sent!

Apr  7 17:07:31 ip-10-0-0-112 postfix/postfix-script[1714]: refreshing the Postfix mail system
Apr  7 17:07:31 ip-10-0-0-112 postfix/master[1549]: reload -- version 2.6.6, configuration /etc/postfix
Apr  7 17:07:36 ip-10-0-0-112 postfix/qmgr[1719]: 6469620158: from=, size=326, nrcpt=1 (queue active)
Apr  7 17:07:39 ip-10-0-0-112 postfix/smtp[1728]: 6469620158: to=, relay=outbounds5.obsmtp.com[64.18.6.12]:25, delay=683, delays=681/0.08/0.31/2.3, dsn=2.0.0, status=sent (250 Thanks)
Apr  7 17:07:39 ip-10-0-0-112 postfix/qmgr[1719]: 6469620158: removed


Sweet. That only took a million tries to get this all working.

Only problem now is that the mail hasn't shown up in the recipient inbox.

What is also interesting is that even though my server indicates the message was sent, I see no indication in the Postini outbound logs for this domain that the message was received.

Next I looked at the quarantine for the user I am sending to, and I see that the messages are, in fact, there. They have been flagged as spam - probably tripped up after sending too many requests. They should not be flagged as spam otherwise, because my SPF records are set to include the new AWS elastic IP address.


This would be another excellent point to back up your main.cf file. :)

--- Open other ports if needed ---

Ok now I want to open up some other ports to send mail on from other applications that require specific ports.

sudo vi /etc/postfix/master.cf

Add a line for the port you want to open up (make sure your VPC allows connections on this port). You can see the line I added below. Note that the type is inet, to allow connections from the Internet, and the command is smtpd.

# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (yes)   (never) (100)
# ==========================================================================
smtp      inet  n       -       n       -       -       smtpd 

468       inet  n       -       n       -       -       smtpd 


Check the logs to make sure everything is groovy - no errors. For example, a stray "i" at the start of the file (from typing i for insert one time too many) causes a message about an invalid transport type. Not that I ever did such a thing.

Now open PuTTY as instructed above and test EHLO, but this time connect on your new port - 468 in the example above. If you get the expected results from EHLO, your mail system is open and listening on that port.

Another fine time to back up master.cf

Now you possibly want to authenticate users (i.e., have them log in with a user name and password). That will be handled in my next post.

Friday, March 29, 2013

Connection Timeout Running Yum on EC2 instance with VPC

I was getting a connection error like this trying to run Yum on an Amazon AWS EC2 Redhat Linux instance in a public subnet of a VPC with a security group I had set up specifically for this machine.

http://packages.us-west-2.amazonaws.com/2012.09/main/201209eb6a01/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://packages.us-west-2.amazonaws.com/2012.09/main/201209eb6a01/x86_64/repodata/repomd.xml: (28, 'connect() timed out!') Trying other mirror.

I found that opening up outbound traffic completely in the security group for that server resolved the problem, and I was able to successfully download packages.

I talked to some Amazon folks at an event recently who told me that because the VPC security group is a stateful firewall, it is OK to open all outbound traffic for that server.

However... if you prefer to know that you are getting your updates from a valid Amazon repo, or at least an Amazon IP, you can open up outbound traffic in your security group to only the specific IPs or IP ranges for the repo(s) you are trying to connect to.

For instance, if the error message says you are trying to connect to: http://packages.us-west-2.amazonaws.com...

Open a command prompt and ping packages.us-west-2.amazonaws.com.
I got IP address 205.251.235.166.

The IP for this repo could change obviously but you could set up your security group to allow outbound traffic to this IP address. If the IP for that repo changes at some point you'll get an error and have to change the IP to whatever Amazon changes the domain to point to in the future.
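ping works, but you can also resolve the name without sending any packets to it. A sketch using getent, which ships with glibc and uses the system resolver (the address you get back may well differ from the 205.251.235.166 above, since Amazon can remap it at any time):

```shell
# resolve_ip: print the first IPv4 address the system resolver returns
# for a hostname.
resolve_ip() {
  getent ahostsv4 "$1" | awk '{print $1; exit}'
}

# The repo host from the yum error message:
resolve_ip packages.us-west-2.amazonaws.com
```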

You can also go to arin.net and get the complete Amazon IP range for this IP and allow traffic to all Amazon IP addresses outbound. In this case 205.251.192.0/18.

http://whois.arin.net/rest/net/NET-205-251-192-0-1/pft

NetRange 205.251.192.0 - 205.251.255.255
CIDR 205.251.192.0/18
Name AMAZON-05
Handle NET-205-251-192-0-1
Parent NET205 (NET-205-0-0-0-0)
Net Type Direct
Assignment Origin AS AS7224 AS16509 AS39111
Organization Amazon.com, Inc. (AMAZON-4)
Registration Date 2010-08-27
Last Updated 2012-03-02
Comments RESTful Link http://whois.arin.net/rest/net/NET-205-251-192-0-1
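To double-check that an address actually falls inside that /18 before whitelisting it, here is a small sketch in plain shell arithmetic (no external tools assumed):

```shell
# ip_to_int: convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# in_cidr IP NETWORK PREFIX: succeeds when IP is inside NETWORK/PREFIX.
in_cidr() {
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# The repo IP from the ping above, against the ARIN range:
in_cidr 205.251.235.166 205.251.192.0 18 && echo "inside" || echo "outside"
# prints "inside"
```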

When I ping packages.sa-east-1.amazonaws.com I get a LACNIC IP address:

177.72.244.0

You'd have to go to lacnic.net to look up that IP range:
inetnum:     177.72.240/21
aut-num:     AS53032
abuse-c:     MAAZI67
owner:       A100 ROW SERVICOS DE DADOS BRASIL LTDA
ownerid:     012.147.176/0001-50
responsible: Marla Azinger
country:     BR
owner-c:     MAAZI67
tech-c:      MAAZI67
inetrev:     177.72.240/21
nserver:     pdns1.ultradns.net 
nsstat:      20130329 AA
nslastaa:    20130329
nserver:     pdns2.ultradns.net 
nsstat:      20130329 AA
nslastaa:    20130329
nserver:     pdns3.ultradns.org 
nsstat:      20130329 AA
nslastaa:    20130329
nserver:     pdns5.ultradns.info 
nsstat:      20130329 AA
nslastaa:    20130329
nserver:     pdns6.ultradns.co.uk 
nsstat:      20130329 AA
nslastaa:    20130329
created:     20110816
changed:     20111121

nic-hdl-br:  MAAZI67
person:      Marla Azinger
e-mail:      mazinger@amazon.com
created:     20111114
changed:     20111118

Friday, April 06, 2012

SSL Cert Install Issues - Digicert SSL certs

Ok, I just had to do this in February, but I already forgot all the little things you have to do to get SSL certificates from DigiCert working in various systems. Documenting the installation process here to aid my somewhat non-functional short-term memory.

login at digicert:
https://www.digicert.com/custsupport/

Go to My orders Tab

Click on Cert

Rekey cert
http://www.digicert.com/ssl-support/ssl-certificate-reissue.htm

Choose 2048 bits

Generate IIS cert on web server:
http://www.digicert.com/csr-creation-microsoft-iis-7.htm

After cert generation, close and go back into IIS (or hit F5 to refresh) to get the new cert to show up

Add the CSR to your cert at digicert. Wait a few minutes for it to get reissued.

Get the Certificate from digicert (download it) after has been reissued.

If you get an "ASN1 bad key" error when you try to import, run the DigiCert utility to import the key instead:
https://www.digicert.com/util/

Go to site bindings and edit - choose your new SSL cert - shouldn't need to restart the web server.

Test your site - SSL should be working.

Now for the IMail mail server... you have to go through some hijinks to get that working with your IIS cert. Instructions here:

http://support.ipswitch.com/kb/IM-20030415-DM01.htm

With these steps, IMail will work for general email clients like Outlook.

OK on to Java...

If you are connecting to IMail from a Java app with TLS, you'll get an error stating that PKIX path building failed. In this case you need to go to the DigiCert site and get the IIS cert, the DigiCert CA cert, and the root cert, and add them to the cert file you created in the step above for IMail. You can download all three certs as a zip file when you click the download button (under the cert box). Put them in the cert file in this order, click the "reset" button, re-enter the password, hit Apply, and restart all IMail services.

-----BEGIN CERTIFICATE-----
blah blah blah ...your cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
blah blah blah ...digicert ca cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
blah blah blah ...root cert
-----END CERTIFICATE-----
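The concatenation itself is just cat in the right order. A sketch with placeholder stand-ins - the file names are assumptions, so use whatever DigiCert's zip actually contains:

```shell
# Placeholder stand-ins for the three files from the DigiCert zip.
printf -- '-----BEGIN CERTIFICATE-----\nyour cert\n-----END CERTIFICATE-----\n' > your_domain.crt
printf -- '-----BEGIN CERTIFICATE-----\nca cert\n-----END CERTIFICATE-----\n' > DigiCertCA.crt
printf -- '-----BEGIN CERTIFICATE-----\nroot cert\n-----END CERTIFICATE-----\n' > TrustedRoot.crt

# Order matters: your cert first, then the intermediate CA, then the root.
cat your_domain.crt DigiCertCA.crt TrustedRoot.crt > combined.pem

# Quick check that all three blocks made it in (should print 3):
grep -c -- '-----BEGIN CERTIFICATE-----' combined.pem

rm -f your_domain.crt DigiCertCA.crt TrustedRoot.crt combined.pem
```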

At least that worked for me....

Now for the Java web server...
http://www.digicert.com/ssl-certificate-installation-java.htm

Hopefully next time will go a little smoother...unless somebody changes something.

Saturday, July 23, 2011

Error occurred during initialization of VM

Good post on resolving the error:

Error occurred during initialization of VM Unable to load dependent libraries: Can't find dependent libraries

The culprit is too many java.exe's and the wrong one being put to use.

http://geekycoder.wordpress.com/2009/07/08/java-tips-adventure-in-resolving-the-java-error-error-occurred-during-initialization-of-vm/

What I am curious about is how the java.exe that caused this error got into the Windows directory. I'm not going further with this (I'm busy), but the things I did recently include upgrading to the latest version of Java (which I did twice, due to putting it in a directory I didn't want it in the first time around), installing some software from Fluke Networks, and visiting a bunch of Java programming pages on the net. While looking at one of them, an error was thrown. Not sure if that was related to the web site or just a coincidence.

Oh well - I renamed all the java.exe's except the one I wanted so they wouldn't get run, and put the path to the one I intended to use on my system path, so all good it seems.

Sunday, July 10, 2011

Using JavaScript and Ant

If you hate the fact that you are doing programming-like tasks with XML in Ant, consider using JavaScript if you don't want to write new custom Java classes to use as tasks. I would probably opt for custom tasks, but some people seem to prefer scripting languages, so here you go:

JavaScript and Ant

In general I try to keep the configuration in the XML file separate from the executable code, which is why I would opt for custom Java tasks configured by XML rather than incorporating executable code into an XML file. That's just me.

I also read you can integrate Groovy with Ant.

Sunday, July 03, 2011

Conditionally Call an Ant Target

If you want a target to run only when a particular value is set, it's easy - just use the if attribute as follows:


<target name="whatever" if="my.value">
  <echo message="target executed"/>
</target>


If you want the above target to run, set my.value to something in your property file before that target, and it will run.


<property name="my.value" value="yada"/>

If you leave the property out of your file altogether the target won't run.

You can also have other tasks set values and use those values to determine whether targets should run - based on whether a file is available, etc. Details about conditional values can be found in the Ant FAQ:

Conditionally Execute Ant Target

Saturday, July 02, 2011

Ant - Copy Files That Have Changed

If you want to copy only the files that have changed from one directory to another, use a fileset with the "modified" selector, like this:


<copy todir="${dir.copy.to}">
  <fileset dir="${dir.copy.from}">
    <modified/>
  </fileset>
</copy>


Ant will set up a cache.properties file that tracks file information, so it knows the state of the files after each build. When it runs the task, it refers to the cache file to determine which files have changed since the last successful build and need to be included in the fileset.

If you want the build to include a file that is getting excluded by the cache file just delete that file from the cache.properties file and it will be included in the file set the next time the ant build is run.
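A sketch of that edit (the path keys and values below are hypothetical - open your own cache.properties to see the real keys Ant recorded):

```shell
# Build a sample cache.properties like the one Ant maintains.
CACHE=$(mktemp)
printf '%s\n' \
  'src/Foo.java=1365300000000' \
  'src/Bar.java=1365300000001' > "$CACHE"

# Remove the entry for src/Foo.java so the next build treats it as modified.
grep -v '^src/Foo.java=' "$CACHE" > "$CACHE.tmp" && mv "$CACHE.tmp" "$CACHE"

cat "$CACHE"   # only the src/Bar.java entry remains
rm -f "$CACHE"
```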

Tuesday, June 28, 2011

Get Ant to Call a Java Class

To get Ant to call your own Java class to do some processing, you can write a custom Ant task. Here's the info on how to do that:

http://ant.apache.org/manual/develop.html

Sunday, June 26, 2011

Running Ant from within a Java Class

Here's a good article on running Ant within another Java program.

http://www.ibm.com/developerworks/websphere/library/techarticles/0502_gawor/0502_gawor.html

You might want to do this if wrapping your ant build in a process that needs additional auditing and security, or to create a friendly user interface for running builds, for example.

Ant, Gradle, Maven, Build Software and Processes

Still pondering all my options for a build solution for a particular project but these are my thoughts going into further testing and proof of concept for a new build and deploy solution.

What build solution you choose really depends on the requirements of your particular project, security concerns and the staff you have available to support the solution.

I find Ant simple to use and manage. I like the control of what is getting into my code when and how. Because I put it there.

For all the dependency management issues addressed by Maven and some other build solutions: I think it is not difficult for experienced programmers to find and download a jar file or source code, though some companies and organizations providing software libraries and open source software could make this more straightforward and manage their repositories of code and libraries better.

Tangent: and when providing source on the Internet, don't require SVN to download it (dislike). Make it easy for me to download a zip. It's a lot faster, and I want a stable version that's not going to change the next time I download it. I also don't want those .svn files in my source if I don't want the software changing on me, or my changes uploaded to your repo.

That being said large companies managing what software their developers include in production software have other concerns. These companies should have a process for requesting and downloading new libraries off the Internet to a central repository so they know what is being deployed to production systems, regardless of what build software they are using. This process should include making sure the code is from a trusted source, checking downloads with provided security keys and possibly even compiling open source libraries to make sure no rogue code has slipped in, and/or (crazy thought I know) removing questionable code in these libraries which is not needed and may introduce security problems (but let your dev's know so they don't try to use features that are not available).

You may not have these next problems with Maven if you are working on a project that is small and does not have a lot of dependencies, or if you are using an internal repo at your company where Maven is not allowed to access the Internet (but you might in any case). My experience with Maven on complex projects has, in general, been a lot of overhead managing things, even though Maven is supposed to make things simpler. Maven's plus is that it handles dependency management by going out and grabbing all the libraries you need off the Internet or an internal repo for you. Yes, I know it does other stuff too.

In my case, it seemed like I ended up with a bunch of compile errors to fix every time I turned on my computer. On a recent project, Maven kept updating things when the dependencies should have been stable, because I wasn't changing versions. This bothered me, as did having to re-fix my code every time I turned on my computer for no apparent reason. I think when you know what you're doing with the dependencies, Maven can actually make things more complicated, time consuming, and less secure. I stopped using Maven for that project and have been much more productive ever since.

Also with Maven, there are security concerns with pulling random code from the Internet (unless your company has an internal repository which is well managed and secured). If you don't believe that last point, please don't work on any type of financial application or anything that stores personal private data, or store anything important on the machine you dev with, or let it on a network where valuable data is stored. Thanks.

I know a lot of system administrators use various scripting languages to manage things and may not prefer an XML configuration file for managing builds. However, when it comes to a build system, I would prefer separation of execution from configuration. Like MVC separates the model from the view, the steps to run and the files to move are in XML, while the programmatic execution is in Java. Personal preference, but separation of duties can make things cleaner, easier to manage, less error prone, and more secure, as we have learned from MVC.

I'd also rather work with languages that I and a lot of other devs know - Java and XML - versus learning another new scripting language specific to a particular build tool. A niche language makes it hard to find employees to support the system, and fewer people in the organization will know it well enough to help fix problems. I also don't want to have to extend (rewrite) the build system to support Java when other solutions already exist that meet my needs.

PS: as you can see from my resume, I have learned a lot of new languages besides Java over the years. This is why considering when a new language is really needed, along with the pros and cons of each, is important to me: WebDatabaseProgrammer.com

As for the need to customize: if you're looking for something Ant can't do, I don't think it's difficult for an experienced Java programmer to add new classes that perform tasks under the Ant framework, or to customize Ant itself. I have customized open source code that didn't do exactly what I wanted; it's not that tough. Download the source, grab the dependencies, and edit the Java as needed, just like any other project. If you intend to pick up future updates, however, keep your custom code separate from the open source library you downloaded, using an appropriate design pattern.
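As a sketch of that approach (the class name and file paths here are made up, not from any real build): a build step can even be a plain Java class that Ant runs with its built-in java task, so you don't have to implement Ant's Task API at all to add custom behavior.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// A plain-Java build step: copy one file to another location.
// Invoked from an Ant target with something like:
//   <java classname="CopyStep" failonerror="true">
//     <arg value="src.txt"/><arg value="dest.txt"/>
//   </java>
public class CopyStep {
    public static void main(String[] args) throws IOException {
        if (args.length != 2) {
            System.err.println("Usage: CopyStep <source> <destination>");
            System.exit(1);
        }
        Path source = Paths.get(args[0]);
        Path dest = Paths.get(args[1]);
        Files.copy(source, dest, StandardCopyOption.REPLACE_EXISTING);
        System.out.println("Copied " + source + " to " + dest);
    }
}
```

A step like this logs through the same build output as everything else, which fits the auditing point below.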

It seems to me that a universal language many developers know should cover whatever the build system needs to do. Java plus XML can do pretty much anything you can imagine; rare is the project where Java couldn't get the job done well enough. Adding new Java tasks does require compiling code, but it also adds some control over what your build system is allowed to do, and universal logging lets you audit what your build system is doing regardless of who edits the XML file. You can also add a layer of security based on who can edit the Java classes and who can edit the XML.

I actually like that Ant properties (its variables) are immutable once the build process starts. This ensures your values aren't changing midstream and avoids tricky errors caused by variables changing in random places during the build. XSL transformation works the same way; it took some getting used to, but now I like it, because once you get to the point of transforming, or building, certain things should be set.
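A minimal sketch of that behavior (the property name is made up): once a property is set, later attempts to set it again are silently ignored, so the first value wins for the whole build.

```xml
<project name="demo" default="show">
  <property name="build.dir" value="build"/>
  <!-- This second assignment is ignored: Ant properties are immutable -->
  <property name="build.dir" value="somewhere-else"/>
  <target name="show">
    <!-- Echoes "build", not "somewhere-else" -->
    <echo message="${build.dir}"/>
  </target>
</project>
```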

In general, your build system is probably one of the most important places you don't want to take security risks. That's where you can audit everything and make sure nothing unintended gets into your production software. Whatever build solution you choose, I recommend making sure your processes for updating your builds are secure and audited, including database updates, compilation, and deploying all types of files. It doesn't matter how much development and testing you do on the source code if someone can alter it when it gets deployed and after everyone's done scrutinizing it. Make sure this process is secure.

Sunday, April 03, 2011

Eclipse: Selection Does Not Contain A Main Type

If you are having a problem launching your project as a Java Application in Eclipse due to the error "Selection does not contain a main type", a simple workaround is to create a new Java project and copy your class files into the source folder of the new project.

You can also create a new Java project in Eclipse and change the build path to point to your existing source folders. Right click on the project, select Build Path > Configure Build Path, and click on the "Source" tab. Remove any folders you don't want and point to your existing source folders.

Then your Eclipse Java project configuration should be accurate (without having to figure out the problem with it - which is the other option) and you should be able to right click on a class file with a main method and run it.
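One other common cause of this error, separate from project configuration, is a main method with the wrong signature. Eclipse only recognizes the exact form below (the class name here is just an illustration):

```java
public class Launcher {
    // Eclipse looks for exactly this signature: public, static, void,
    // named "main", taking a single String[] (or String...) parameter.
    // A variant like "public void main(String args)" won't be detected.
    public static void main(String[] args) {
        System.out.println("main type found");
    }
}
```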

Related posts:

http://dev.eclipse.org/newslists/news.eclipse.newcomer/msg04471.html

http://stackoverflow.com/questions/4252472/java-launch-error-selection-does-not-contain-a-main-type

Sunday, March 27, 2011

SSL Certificates for Java Web Servers

Digicert makes it pretty simple to get SSL certificates for Java web servers. I like.

They have a tool to generate the command line code you need to enter to generate your certificate request here:

https://www.digicert.com/easy-csr/keytool.htm

Just enter the appropriate information. Open a command prompt window on the server on which you intend to install the certificate. Navigate to the folder where you want to store the certificate files. Copy and paste the command into the command prompt window.

A file will be created for the keystore, and for the CSR (certificate request). Copy and paste the certificate request (in the .csr file that gets created) into the box on the order form where it asks for that information.

Once your certificate is approved, download and copy it to the folder you created above. Run the command on this page but replace the domain name with the domain name for which you ordered the certificate:

http://www.digicert.com/ssl-certificate-installation-java.htm

Friday, January 14, 2011

Syntax and Result Differences - Sybase and SQL Server

SQL Server was originally based on Sybase, and both use Transact-SQL. However, I've found some differences in behavior (at least in the version of Sybase and the driver I'm using).

In Sybase, you cannot do a COUNT in a SELECT without a GROUP BY and get accurate results.

COUNT without GROUP BY:

In Sybase, select count(some_field) from whatever_table will return a row for every row in the table rather than a single total. That select statement works in SQL Server without the GROUP BY. In Sybase you'll need to add a GROUP BY, like this:

select count(some_field) from whatever_table group by (some_field)

COLUMNS and GROUP BY:

If you don't have all the columns you are selecting in the GROUP BY, Sybase may give you incorrect results with no error. In SQL Server the same query gives you an error like this:

Column 'some_field' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

In Sybase something like this will just give you inaccurate results:

select some_field, other_field
from whatever_table
group by other_field

EMPTY STRING, LTRIM, LEN

- insert

If you try to insert an empty string using an insert statement with Sybase, it inserts a one-character space instead of an empty string, so the length will equal 1.

insert into some_table (id, some_field) values(23, '')
select len(some_field) from some_table where id = 23 (this will give you length of 1)

In SQL Server the above statements would 1.) insert a true empty string and 2.) return a length of 0.

- ltrim and length

In Sybase, ltrim(some_field) on the row above gives you NULL, and len(ltrim(some_field)) also gives you NULL, no matter how many spaces some_field contains.

In SQL Server the same scenario would give you an empty string after trim and 0 for the length.
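If application code has to run against both databases, one defensive option (a sketch; the helper and class names are made up) is to normalize values as they come out of the result set, so these empty-string differences don't leak into business logic:

```java
public class DbStrings {
    // Collapse the cross-database variants of "empty" described above:
    // SQL Server returns "" with length 0, while Sybase may hand back a
    // single space (from an empty-string insert) or NULL (from ltrim).
    public static String normalizeEmpty(String value) {
        if (value == null) {
            return "";
        }
        return value.trim().isEmpty() ? "" : value;
    }

    public static void main(String[] args) {
        System.out.println("[" + normalizeEmpty(" ") + "]");   // prints []
        System.out.println("[" + normalizeEmpty(null) + "]");  // prints []
        System.out.println("[" + normalizeEmpty("abc") + "]"); // prints [abc]
    }
}
```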

Saturday, December 25, 2010

Finding Memory Leaks in Java Applications

Trying this out to analyze memory usage and find leaks in Java application:

http://blog.emptyway.com/2007/04/02/finding-memory-leaks-in-java-apps/

Pretty slick...

In Eclipse for my debug configuration I put this in the vm arguments box:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9000
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-agentlib:hprof=heap=dump,file=/tmp/hprof.bin,format=b,depth=10

A few more details for those not so familiar with Java:

jconsole.exe is found in the /bin directory in the directory where you installed the Java JDK. It is an application with a GUI so just double click on it to open it up. You then select your application from the list to view details about it.

jps.exe is run from a command prompt. It is also found in the /bin directory in the directory where you installed the Java JDK. Open a command prompt (Start menu > Run > type "cmd" > Enter) and navigate to that bin folder, or alternatively type out the full path to jps.exe to run it.

jmap.exe is also in the /bin directory. You also would run that from the command line by typing in the location where you want to write the file and the process id as described in the above article.

jmap -dump:format=b,file=/tmp/java_app-heap.bin 15976

Once that file is created, run jhat as directed.

jhat -J-Xmx326m /tmp/java_app-heap.bin

Then in browser go to:

http://localhost:7000

You can browse that data to find which objects are most in use, how much memory they are taking up, etc.
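To see what a leak looks like in jhat, you can run a deliberately leaky program like this sketch (the class and field names are made up) and take a heap dump while it runs; the static list keeps every allocation reachable, so the byte[] counts keep growing in the histogram:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // A static collection that is never cleared: the classic leak shape.
    private static final List<byte[]> retained = new ArrayList<>();

    public static void main(String[] args) {
        // Bounded here so the demo stops on its own; raise the limit
        // (and add a sleep) to watch the heap grow in jconsole.
        for (int i = 0; i < 1000; i++) {
            retained.add(new byte[1024]); // 1 KB per iteration, never freed
        }
        System.out.println("Retained roughly " + retained.size() + " KB");
    }
}
```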

If you have a very slow leak, you can run the app in production with the options above to output the file over time, and then view it.

Seems like support for Eclipse Memory Analyzer and Eclipse TPTP is dwindling, but the method above is pretty simple.