Friday, May 19, 2017

No such file or directory #include <openssl/opensslv.h>

Similar to my last post on the missing pyconfig.h include on AWS EC2 instances, this error turned up when attempting to run the following command in a virtual environment to create a Lambda install package:

pip install cryptography

The next error is:

build/temp.linux-x86_64-3.5/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory
     #include <openssl/opensslv.h>
                                  ^
    compilation terminated.
    error: command 'gcc' failed with exit status 1

Once again, run the yum list command to find the correct library. In this case the name is not quite so apparent. I found a number of candidates using this command:

sudo yum list | grep openssl

Such as the following:

openssl.x86_64                        1:1.0.1k-15.99.amzn1          installed   
apr-util-openssl.x86_64               1.4.1-4.17.amzn1              amzn-main   
krb5-pkinit-openssl.x86_64            1.14.1-27.41.amzn1            amzn-main   
openssl.i686                          1:1.0.1k-15.99.amzn1          amzn-main   
openssl-devel.x86_64                  1:1.0.1k-15.99.amzn1          amzn-main   
openssl-perl.x86_64                   1:1.0.1k-15.99.amzn1          amzn-main   
openssl-static.x86_64                 1:1.0.1k-15.99.amzn1          amzn-main   
openssl097a.i686                      0.9.7a-12.1.9.amzn1           amzn-main   
openssl097a.x86_64                    0.9.7a-12.1.9.amzn1           amzn-main   
openssl098e.i686                      0.9.8e-29.19.amzn1            amzn-main   
openssl098e.x86_64                    0.9.8e-29.19.amzn1            amzn-main   
xmlsec1-openssl.i686                  1.2.20-5.3.amzn1              amzn-main   
xmlsec1-openssl.x86_64                1.2.20-5.3.amzn1              amzn-main   
xmlsec1-openssl-devel.x86_64          1.2.20-5.3.amzn1              amzn-main 

Looks like we want this one based on a little research: openssl-devel.x86_64

sudo yum install openssl-devel.x86_64 
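
To confirm the missing header is now in place before re-running pip, you can check for it directly (openssl-devel installs its headers under /usr/include/openssl on Amazon Linux):

ls /usr/include/openssl/opensslv.h

pip install cryptography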

Yep, that seems to do the trick. See my next post for a complete list of commands to install the python SSH library paramiko, which requires the cryptography library, on an AWS EC2 instance in a virtual environment.

---

Trying to set up a Python virtual environment that has the correct libraries and/or works on AWS Lambda? Check out these blog posts which are specific to Paramiko and Cryptography but explain how to determine which libraries to use and package up a virtual environment that has the correct underlying libraries:

http://websitenotebook.blogspot.com/2017/05/installing-paramiko-and-crytography-in.html


http://websitenotebook.blogspot.com/2017/05/creating-paramiko-and-cryptography.html

No such file or directory #include <pyconfig.h>


If you get an error that looks like this when trying to run pip install (such as pip install cryptography or pip install paramiko) on an AWS EC2 instance:

build/temp.linux-x86_64-3.5/_openssl.c:12:24: fatal error: pyconfig.h: No such file or directory
  #include <pyconfig.h>
  compilation terminated.
  error: command 'gcc' failed with exit status 1

...then you need to install the python development tools. Many blog posts explain this with answers like the following for python 2 or python 3 (these package names are for Debian-based systems):

sudo apt-get install python-dev

sudo apt-get install python3-dev

On AWS, however, the libraries have different names. First run this command to list the available libraries that can be installed:

sudo yum list | grep python3

In my case, I see that the library I need on this particular instance is not python3-dev but rather python35-devel.x86_64, which means to get this library I will instead run this command:

sudo yum install python35-devel.x86_64

Note that you will need to install the version of the library that is compatible with the version of python you are using.
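
To double-check that the package delivered the missing pyconfig.h, rpm can list the files a package installed (using the package name found above):

rpm -ql python35-devel | grep pyconfig.h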


---


unable to execute 'gcc': No such file or directory error: command 'gcc' failed with exit status 1

If you receive this error trying to run an installation script:

 unable to execute 'gcc': No such file or directory
 error: command 'gcc' failed with exit status 1

Install gcc for compiling C code

sudo yum install gcc

Note however that it is not recommended to leave gcc installed on production systems. Only install it on development systems where code needs to be compiled, and on systems used to build and deploy software in a well controlled and audited environment. If you leave it on production systems, anyone who gets onto the machine can write or download code and compile it there. This poses an additional attack vector.
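
If you only needed the compiler for a one-off build, one option is to remove it again once the build is done:

sudo yum remove gcc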

---


Thursday, May 18, 2017

Permission denied (publickey). or Timeout trying to SSH to an AWS EC2 Instance

If you are trying to SSH into an AWS EC2 instance and having problems, here are some things to check. Although the screenshots are specific to AWS, the same principles apply to SSH problems on other networks as well.

Permission denied (publickey).

Make sure you are using the correct EC2 key that was assigned to the instance or created when the instance was launched. You should have downloaded this key to your local machine. The key name is listed on the summary page for the EC2 instance.



Ensure that you have not changed the contents of the file in any way. Renaming it should be fine.

Change the permissions on the key file so it is read-only for the owner by running this command:

chmod 400 your_key_file.pem

After you run this command you can type this command to verify the permissions of your file: 

ls -al

If set correctly the permissions will look like this:

-r--------@  1 tradichel  1426935984  1692 May 18 21:00 your_key_file.pem

Make sure you have navigated to the directory where the key file is located or are using the correct path to the key in your ssh command:

ssh -i your_key_file.pem ec2-user@54.191.224.43

Make sure you have included the user name in your ssh command. The default user name for an AWS Linux instance is ec2-user:

ssh -i your_key_file.pem ec2-user@54.191.224.43

Check that you are using the correct public IP address.


If you have connected repeatedly to the same IP address and that address has presented different host keys over time (for example, because a new instance took over the IP), you may need to delete the existing entry for the IP address from your known hosts file. You will see the location of your known hosts file if you run the ssh command with -vvv (verbose):

ssh -vvv -i your_key_file.pem ec2-user@54.191.224.43

The known hosts file location will look like this on a mac:

debug1: Found key in /Users/username/.ssh/known_hosts:2

You can simply delete the entire file or the offending entry.
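
Rather than editing the file by hand, you can remove the entry for a specific host with ssh-keygen (using the example IP from above):

ssh-keygen -R 54.191.224.43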

Timeout

Make sure your network configuration allows SSH traffic to reach your instance on port 22 and allows responses back to the SSH client on ephemeral ports.
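
As a minimal sketch, the inbound rule can be added with the AWS CLI (the security group ID and source CIDR below are placeholders for your own values). Security groups are stateful, so responses to inbound SSH are allowed automatically; the ephemeral port rules matter for stateless network ACLs:

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 203.0.113.0/24
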
Random Failures with Active Directory and SSH

If you are using Active Directory as a means of connecting to an EC2 instance, there are a myriad of issues that may be occurring, often related to network ports. Active Directory requires a number of ports to work correctly, and the exact set differs depending on configuration. AD can dynamically determine which address to use based on DNS settings. If you find connections randomly failing, there is likely something wrong with the network rules: some of the IP addresses behind the corresponding DNS names have been left out. When the connection works, AD happened to pick an address whose rules are set up properly; when it fails, it picked one that was not. Additionally, backhauling connections to a data center may introduce latency or other network problems along the way that cause failures. There are a variety of ways to architect Active Directory logins to overcome these problems, but in general, check that ALL the required addresses are allowed in your networking rules, not just a subset.

Wednesday, May 17, 2017

Manual AWS Console Updates When Using CloudFormation

Manual vs. CloudFormation Updates

Consider the following scenario:
  1. A DevOps person runs a CloudFormation template to create a stack. Let's say it's a network stack that defines all the traffic that can go in and out of your network.
  2. A nefarious or ill-advised person logs into the AWS console and manually changes the networking rules, opening ports to allow evil traffic. For example, perhaps the person creates rules that open the ports used by WanaCryptor, intentionally or unintentionally (though hopefully no one is running SMBv1 on AWS!).
  3. DevOps person re-runs the networking stack via CloudFormation to restore the network to the desired state.
Does this work?

No.

How Does CloudFormation Know When To Make a Change?

CloudFormation seems to only know about the changes it made and the differences between the template it is running and the last template it ran. CloudFormation will compare the old, unchanged template to the new template and go "Cool, everything's all good. Move on".

I just manually changed an S3 endpoint policy, bucket policies, and IAM policies in the console, then re-ran the original CloudFormation stacks, and my manual policy changes remained intact.

How Does This Impact Security of Your AWS Account?

If you have a critical, security-related stack and you want to maintain that stack in a secure state, you should structure your account to ONLY allow CloudFormation (if that is your tool of choice) and write security policies that allow the appropriate people to update these stacks only through your well-audited deployment mechanism. You might also be able to use Config rules and other event triggers, but that seems more complicated and error-prone than a straightforward process of locking down how things are deployed in your account. If you only find out about a problem AFTER it happens and then fix it, it might be too late. I explain this in more detail in this white paper on Event Driven Security Automation.

How Can Manual Problems Be Fixed?

In order to fix this problem, a change can be made to the template that forces an update. In the case of my policies, I can alter the policies in my template to force them to be updated, for example by changing the Sid. Deleting something from a template, running the template, then recreating it can work in some cases. Manually deleting things created outside CloudFormation in the console is an option. However, deleting resources is not an option when you have existing systems running that cannot be taken offline and that are using the network you are trying to delete. In fact, if you try to do this your CloudFormation stack may end up in a state that is difficult, if not impossible, to fix - though some new features have been added to CloudFormation to override certain problems. You could create a new network and move resources to the new network so you can delete the old one, but that also can be very complicated.
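
In other words, it is the template change (such as the edited Sid) that triggers the update when the stack is re-run; re-running an unchanged template is effectively a no-op. With the AWS CLI, a re-run looks like this (the stack name and file name are placeholders):

aws cloudformation update-stack --stack-name my-network-stack --template-body file://network.yaml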

Recommendation for Deploying Security Controls on AWS

For this reason...I highly recommend that if you use CloudFormation for critical stacks such as security appliances and networking, you make it the only option for deployment and create appropriate stack policies so those stacks cannot be altered into an undesirable state. In fact, I would recommend that in production accounts, only automated processes be used for deployments. In QA, using only automated deployments ensures your testing is accurate. Using only automated mechanisms in your development environment ensures your automation works. If you MUST provide manual access, create a sandbox for testing and clicking buttons. You could also use something other than CloudFormation to automate and control how resources are deployed; CloudFormation is not the only way to do this, but it offers a lot of built-in security benefits.

CloudFormation Can Improve Security In Your AWS Account, When Used Properly

Tuesday, May 16, 2017

Accessing Files in S3 via a Lambda Function in a VPC using an S3 Endpoint

This post explores creation of a lambda function inside a VPC that retrieves a file from an S3 bucket over an S3 endpoint. The Lambda function below is written in Python. 

Why Use a VPC S3 Endpoint? 

Traffic to a VPC endpoint creates a private connection between the specified VPC and the AWS service. By creating the appropriate policies on our bucket and on the role used by our Lambda function, we can force any requests for files in the bucket from the Lambda function to use the S3 endpoint and remain within the Amazon network. If we only allow GetObject via the endpoint, any requests for files must come from within our VPC. By putting a policy on the VPC endpoint, we can limit what S3 actions the Lambda role can take on our bucket over the network. By putting additional restrictions on the bucket policy we can limit who can upload to the bucket, enforce MFA, and restrict access to specific IP addresses. All these things work together to protect the data in the bucket. Of course, you have to remember that anyone who has permissions to change the policies would be able to remove these restrictions and get to the data in the bucket, so grant permissions to change permissions sparingly and consider segregation of duties.

For a more detailed explanation of how data flows via an S3 endpoint see this post:


CloudFormation Templates

The following resources need to be created before we can write a lambda and run our test:

  • VPC
  • Subnet and security group for the Lambda function
  • S3 bucket
  • Lambda role
  • S3 bucket policy that allows our Lambda role to access the bucket
  • S3 endpoint and endpoint policy that grant access to the bucket, via a route in the subnet route table
  • Lambda function

Resources:

The S3 Endpoint was created and assigned to the desired route table. The Route Tables tab displays the reference to the subnet and route table where the route has been added for the S3 VPC Endpoint.


A look at the route table for the subnet shows the route listed for the S3 endpoint. The route table has a route that allows access to S3 via a Prefix List. 

Wait...what's a prefix list? A service (S3) reached through a VPC endpoint is identified by a prefix list: the name and ID of the service for a region. A prefix list ID uses the form pl-xxxxxxx, and that ID needs to be added to the outbound rules of the security group to allow resources in that security group to access the service (in this case S3 in the Oregon, or us-west-2, region). Basically it appears to allow traffic to be routed to that service within the AWS network.
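
You can look up the prefix list name and ID for S3 in your current region with the AWS CLI:

aws ec2 describe-prefix-lists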



A security group was created and an egress (outbound) rule was added to it that allows access to S3 via the prefix list.



The Lambda function shows that it has been created inside a VPC, using our subnet and specified security group. You'll notice one thing that's a bit odd - the destination for our S3 VPC Endpoint rule is blank. But will it still work?


Lambda Python Code

The Lambda Python code should allow retrieving a file from the bucket - in my case, retrieving a key for administration purposes:

from __future__ import print_function
import boto3
import os
import subprocess

def configure_firebox(event, context):

    # Create an S3 client; the Lambda execution role supplies the credentials
    s3 = boto3.client('s3')

    # The bucket name is passed in via a Lambda environment variable
    bucket = os.environ['Bucket']
    key = "firebox-cli-ec2-key.pem"

    # Retrieve the key file from the bucket (over the S3 endpoint)
    response = s3.get_object(Bucket=bucket, Key=key)
   ...


Success

If you have successfully created your networking and policies, you will be able to access the files in your bucket over the S3 endpoint (and only over the S3 endpoint if you desire). In fact you can restrict access to the bucket to the S3 endpoint only using the following policy:

{
   "Version": "2012-10-17",
   "Id": "Policy1415115909152",
   "Statement": [
     {
       "Sid": "Access-to-specific-VPCE-only",
       "Action": "s3:*",
       "Effect": "Deny",
       "Resource": ["arn:aws:s3:::examplebucket",
                    "arn:aws:s3:::examplebucket/*"],
       "Condition": {
         "StringNotEquals": {
           "aws:sourceVpce": "vpce-1a2b3c4d"
         }
       },
       "Principal": "*"
     }
   ]
}
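
The vpce-1a2b3c4d value above is an example; to find the ID of your own endpoint, you can run:

aws ec2 describe-vpc-endpoints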


Time Out

If you get a timeout error, likely one of the networking rules is not set up correctly. 

  • Make sure the route is in the route table associated with the subnet used by the Lambda function.
  • Make sure the security group used by the Lambda function has the prefix list in its outbound rules.
  • Make sure the Lambda function is assigned to the correct subnet and security group shown in the rules above.
  • Make sure the S3 endpoint policy allows access to the bucket by the Lambda role. I learned from AWS that you cannot use a role as the principal in an S3 endpoint policy, just as you cannot use a role as the principal in an S3 bucket policy. It seems logical that you should be able to, and this stumps a lot of people who simply change the arn to the role arn. Sorry...doesn't work! You can, however, limit the actions that can be taken on the bucket, such as GetObject, PutObject, and DeleteObject.

Access Denied

If this error occurs:

An error occurred (AccessDenied) when calling the GetObject operation: Access Denied: ClientError

Check the following (a CLI check after this list can help isolate the cause):
  • The bucket policy allows s3:GetObject on the individual keys (files): "arn:aws:s3:::MyExampleBucket/*"
  • The bucket policy allows s3:ListBucket for the Lambda role on the bucket itself: "arn:aws:s3:::MyExampleBucket"
  • The Lambda role allows access to the S3 bucket.
  • Make sure the file name and the bucket name are correct.
  • Make sure you have a principal in your S3 endpoint policy. At the time of this writing, CloudFormation allows creating a policy without a principal, which results in this error.
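
A quick way to separate policy problems from code problems is to attempt the same fetch with the AWS CLI from an instance in the same subnet (the bucket and key names below follow the examples above):

aws s3api get-object --bucket examplebucket --key firebox-cli-ec2-key.pem /tmp/firebox-cli-ec2-key.pem
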
Timeout Errors 

There seems to be an issue around 8:30 p.m. PST right now which AWS is working to fix and will likely be resolved soon. See this post: http://websitenotebook.blogspot.com/2017/07/timeout-connecting-to-s3-endpoint-from.html

Monday, May 15, 2017

errorMessage: Bad handler

When trying to create an AWS Lambda function if you get this error message:

errorMessage: Bad handler 

Make sure when specifying the handler you use the format [file name].[function name].

For example, if you have the function configure_firebox and the python file name is fireboxconfig.py, then specify the handler as fireboxconfig.configure_firebox.

If you are specifying the name correctly and still getting the error, take a look at your zip file and make sure the python file you are referencing is at the root of the zip file, not in any folders when the code is unzipped.

If you are using the zip command and want to add a file to the archive without its directory structure, use the -j switch:

zip -j ./resources/firebox-lambda/fireboxconfig.zip ./resources/firebox-lambda/python/fireboxconfig.py 
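
You can verify the layout without unzipping the archive; the .py file should appear at the top level with no folder prefix in the listing:

unzip -l ./resources/firebox-lambda/fireboxconfig.zip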


Also check for any spelling errors in your file or handler name.

In CloudFormation:

FireboxConfigurationLambda:
    Type: "AWS::Lambda::Function"
    Properties: 
      Code:
        S3Bucket: !ImportValue FireboxPrivateBucket
        S3Key: fireboxconfig.zip
      Description: Firebox Lambda to Execute CLI Commands
      Environment:
        Variables:
          Test: Value
      FunctionName: ConfigureFirebox
      Handler: fireboxconfig.configure_firebox
      KmsKeyArn: !ImportValue FireboxKmsKeyArn
      MemorySize: 128
      Role: !ImportValue FireboxLambdaCLIRoleArn
      Runtime: python3.6
      Timeout: 3
      VpcConfig:
        SecurityGroupIds:
          - !ImportValue FireboxManagementSecurityGroup
        SubnetIds:
          - !ImportValue FireboxCLISubnet

More:

http://docs.aws.amazon.com/lambda/latest/dg/python-programming-model-handler-types.html

https://github.com/tradichel/FireboxCloudAutomation/blob/master/code/resources/firebox-lambda/lambda.yaml

The provided execution role does not have permissions to call CreateNetworkInterface on EC2

If you get this error when attempting to create an AWS Lambda function:

The provided execution role does not have permissions to call CreateNetworkInterface on EC2

You need to grant Lambda some additional permissions:

ec2:CreateNetworkInterface
ec2:DescribeNetworkInterfaces
ec2:DeleteNetworkInterface

There is an existing managed policy provided by AWS named AWSLambdaVPCAccessExecutionRole, which has all the permissions required by a Lambda function running in a VPC.

You can attach a managed policy to a role using CloudFormation by using the ManagedPolicyArns Property of an IAM role.

Type: "AWS::IAM::Role"
Properties: 
  AssumeRolePolicyDocument:
    JSON object
  ManagedPolicyArns:
    - String
  Path: String
  Policies:
    - Policies
  RoleName: String

For example:

FireboxRole: 
    Type: "AWS::IAM::Role"
    Properties: 
      RoleName: "FireboxLambdaRole"
      AssumeRolePolicyDocument: 
        Version: "2012-10-17"
        Statement: 
          - 
            Effect: "Allow"
            Principal: 
              Service: 
                - "lambda.amazonaws.com"
            Action: 
              - "sts:AssumeRole"
      Path: "/"
      ManagedPolicyArns: 
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"

For more information see: 

http://docs.aws.amazon.com/lambda/latest/dg/vpc.html

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html

https://github.com/tradichel/FireboxCloudAutomation/blob/master/code/resources/firebox-cli/clirole.yaml



Saturday, February 06, 2016

Update Java to a Different Version on Fedora

Update Java 1.7 to 1.8 developer version on Fedora:

1. Install Java 8 (if you want to stick with openjdk developer version use this):

su -c "yum install java-1.8.0-openjdk-devel"

2. Run this command to view versions on your machine:


sudo update-alternatives --config java

3. Choose the number next to the new Java version you installed.

4. Run java -version to verify that the correct version is now the version referenced by the system, as shown below.
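
For example:

java -version

After switching, the output should report a 1.8 version of OpenJDK.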

Sunday, October 11, 2015

Dev VM for AWS and Github on Mac with VMWare Fusion

Why?

Have you ever been working on a project and had something go so awry with your environment that you had to rebuild your machine?

Perhaps your dilemma was not so extreme, but something happened where you wished you could roll back your code and it was not just your code changes you wanted to roll back.

Although we cannot count 100% on a VM protecting our base machine, a VM does provide some level of protection when downloading third-party and open source software for testing.

Perhaps you want a different VM set up for different client projects.

If you don't want software going out to the Internet once installed, you can lock down your VM to only be accessible from your host, and restrict access to the Internet at large.

Do you want to give developers a VM that has all the tools they need pre-configured to save time getting people up and running?

Basically...a VM allows us to set up an environment and create clones of that environment as needed so we don't have to start over from scratch. You can also take snapshots as you develop so you can roll back your entire VM as needed if you are playing around with changes to the VM configuration itself.


Why not?

Running in a VM may slow down your development to some degree vs. running directly on the host machine.

VMs may not always have access to the underlying resources for testing and profiling code accurately - however some settings can be changed. The solution here may be development on one machine, testing and code profiling on another.

There are probably other good reasons why you might not want a developer VM but overall they are pretty handy.


VMWare Fusion on a Mac running Fedora VM + Eclipse

For this example I'm going to set up a Fedora Linux VM on a Mac using VMWare Fusion, with Eclipse installed. Below are the steps, some gotchas, and some security tips. VMWare Fusion is the Mac version of VMWare's Windows products. You could follow similar steps using the Windows version, and you could install other tools that you prefer using this same approach.


Good to Know...

Terminal - To get to the terminal window in Fedora Linux click on “Activities” and type “Terminal” in the search box.

New terminal window - ctrl-shift-t opens a new terminal window in a new tab.

Browser - To get to a browser click on “Activities” then the Firefox/Mozilla icon.

sudo - For any commands below you may have to add “sudo” or execute them as root where that is not specified. By default the user I created was not in the sudoers file, so I add that user below.

Escaping the VM - The apple command button (with the flowery icon) and tab at the same time gets you out of the VM window if you seem to be stuck in it.

Customized hard disk & memory - I had problems changing VM settings (hard disk, memory) after the fact. Best to set them up front if possible.

Take snapshots! -  Take snapshots as you go so you can revert to a known good snapshot if needed or start over from a particular snapshot. If you revert to a snapshot, take another snapshot because you will lose the snapshot you revert to when you use it.

Networking - The biggest difference I have found using VMWare Fusion on Macs vs. Windows VMWare products has been networking, which seems to be temperamental. I have to fiddle with the network adapters at times, as explained below. On Windows it is easier to select and basically hard-code the network adapter you want to use in bridged mode, which usually resolves network problems.

Security - A VM is just a file, and VMs have been hacked by altering the VM file on a machine. If you are storing sensitive data on a VM, protect it appropriately, and protect your VM files appropriately. Also make sure your networking is set up appropriately for your security rules. If you are on a VM, your connections may or may not be going over your VPN (if you have one) depending on how you have things configured, so make sure you understand what's going on with your networking. Do a checksum on any files you download to ensure they are not corrupt and have not been tampered with in transit.

Checksums - 

If you are on Windows you can do integrity checks for downloaded files with built-in tools as well.

To do an integrity check on Linux you can use built-in commands such as sha1sum or, for SHA-512:

sha512sum [file to check]

The result will be a checksum that should match the checksum provided on the site where you downloaded the file. If it doesn't match, the file was corrupted in transit or, worse, altered by someone with likely evil intentions.
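
If you save the published checksum line to a file, sha512sum can do the comparison for you (the file names here are placeholders):

echo "<published checksum>  myfile.tar.gz" > myfile.sha512
sha512sum -c myfile.sha512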

If you use yum or other package installers to do updates, they likely have this checksum verification built into their install process.


Step 1: Create an ISO

An ISO is a file that contains an image that can be used to create a new virtual machine. In our case we are creating an ISO for Fedora Linux.

1. Go to FedoraProject.org
2. Click download
3. Click on Formats in the sub menu
4. Click on the 64-bit DVD option
5. The download starts immediately
6. Check the integrity of the file you downloaded
7. Right click and burn to a blank CD or to your hard drive

You should now have a .iso file you can use to create a Linux VM


Step 2: Create a Fedora Linux VM, customized for Eclipse

1. Start Fusion.
2. File, New, and choose the appropriate Linux options.
3. At the top of the VM screen click the icon with a CD coming out of a drive.
4. Choose the disk option and select the file burned to CD.
5. Click on customize settings when going through the VM setup.
6. Change the name of your VM.
7. Click on hard disk. Change the size to the amount of storage you think you will need for this VM.
8. Hit the back arrow, then apply.
9. Click on memory. I doubled mine to 2048. What number you use depends on how much memory you have on your machine and what else you are running outside the VM that will be using memory. When I didn’t increase the memory, Eclipse ran very slowly and used swap memory.
10. Close the settings.
11. Run your VM (big > arrow in the middle of the screen).
12. Wait…this will take a while…
13. Try to log in.

If you get a PAM module error when logging in that says “system is booting up” and it won’t seem to go away, restart the VM.


Step 3: Check Internet Connection

1. Make sure you can connect to the internet - open a browser and test going to your favorite web site.

2. If you have problems, try changing the network adapter settings as follows:
  • Shut down your VM [Menu: Virtual Machine > Shut Down]
  • Go to Menu: Virtual Machine > Network Adapter > Network Adapter Settings
  • Click the button that says Add Device [your VM must be stopped]
  • You’ll now see “Network Adapter 2” at the top of the dialog
  • Click on the bridged option for Wi-Fi (or whatever network you are currently using) instead of “Share with my Mac”
  • Close the dialog box
  • Start your VM (click the > arrow in the middle of the VM screen or use the menu options)
  • From the menu choose: Virtual Machine > Network Adapter > Disconnect Network Adapter
  • Verify that Network Adapter 2 is connected on the menu: Virtual Machine > Network Adapter 2 [you should see the Bridged (Wi-Fi) option selected and the option to disconnect it - leave it connected]
  • Restart your VM
3. You can try removing and adding network adapters or switching between NAT and bridged mode. I have not experienced consistent behavior with these settings.


Step 4: Add Your User to Sudoers

1. Add your user to the sudoers file using visudo so you don’t have to keep logging in as root with su.

su root 
[enter password]

visudo  

Uncomment this line if it is not already uncommented:

%wheel  ALL=(ALL)       ALL

Exit and save changes, or just exit if there is nothing to change:

:wq!

Add the users you want to allow to run sudo to the wheel group:

usermod -aG wheel [username here]

su back to your user name:

su [your user name]
[enter password]

Test that you can sudo with that user now:

ls /root
(you should get permission denied)

sudo ls /root
[enter password]

This should not give any error.



Step 5: Run Security Updates



Security updates will patch known CVEs (security flaws):

sudo yum update --security

Then y and enter to install.

Note: if you have problems running this because it says PackageKit is running, here’s what I did. I don’t know if this is the best approach, but it solved the problem:

ctrl-shift-t to open a new terminal window in new tab

ps -ax | grep Package

Get the process id, say 112233.

Then kill that process, which in this case would be:

sudo kill 112233

I did the same for an RSS process.

Then my security update would run.

The next problem I had during the update was a report of two conflicting packages. There were two conflicting package names in the error message; I just updated the latter package on its own:

sudo yum update abrt-java-connector-1.0.6-1.fc20.x86_64

Now run the security update again:

sudo yum update --security


Step 6: This is a Good Time to Take a Snapshot

Menu: Virtual Machine > Snapshots > Take Snapshot

You can view your snapshots:

Menu: Virtual Machine > Snapshots 

You might want to right click, choose Get Info, and add a comment like “security updates applied”.


Step 7a: Install Eclipse with Yum -- READ CAVEAT FIRST

I just typed:

sudo yum install eclipse

It proceeded to install a gazillion libraries…well, really it was only 100.

Then I just typed eclipse at the command line. I checked the version and found it had installed the “Kepler” version of Eclipse; however, the latest version of Eclipse (at the time of this writing) is Mars. If you want the latest version you’ll want to download and install it from the Eclipse web site.

So scratch that: roll back to the secure VM snapshot (and take a new snapshot).


Step 7b: Install Eclipse From Eclipse Web Site

1. Search for "eclipse" in Google
2. Click eclipse downloads in Google results
3. Click on SHA-512 to get the hash, which currently looks like this:

b5fe908c9ae4ec2c1e050bca1846b07f0474d3c6abb77ec71ebbc2d71ab89ce3934b6019cb4d700386a2236f28a2ab04ca1976a48b82220ba8563cd9b672b840 eclipse-inst-linux64.tar.gz

4. Choose a mirror close to you or from a name you trust
5. Download box pops up - click OK or choose Save File
6. The file eclipse-inst-linux64.tar.gz is downloaded.
7. The file went to /var/tmp if you just clicked OK, or to your Downloads folder if you chose Save File. If you can't find it, run this command:

sudo find / -name eclipse-inst-linux64.tar.gz

8. cd to the directory where the file is located

cd /var/tmp [or wherever the file is saved if not /var/tmp]

9. Run the checksum integrity check:

sha512sum eclipse-inst-linux64.tar.gz

If the file is OK the output matches the hash above. This of course assumes the hash above was not tampered with, but it is better than doing no integrity check at all.

b5fe908c9ae4ec2c1e050bca1846b07f0474d3c6abb77ec71ebbc2d71ab89ce3934b6019cb4d700386a2236f28a2ab04ca1976a48b82220ba8563cd9b672b840  eclipse-inst-linux64.tar.gz

Since we have a match, my download seems to be OK.

10. If you just hit OK, the archive manager should be open. If not, just double click on the downloaded file to open it.
11. Click Extract.
12. The next screen gives the option to create a new folder, which I did.
13. Extract the files to the folder of your choice.
14. I scanned the readme file for any troubleshooting tips and saw that a particular version of Java is required.
15. Check your java version to make sure it is ok:

java -version

16. If java is not ok you can update manually or run:

sudo yum update java

17. Double click on the Eclipse installer file (eclipse-inst) in the root directory of the extracted files.
18. Choose the version of Eclipse you want - I just chose the first option for the basic Java IDE.
19. Click install and follow the prompts.
20. Go get a cup of coffee or do some jumping jacks while you wait.
21. Click launch.
22. Change the workspace if you want - it's just the location where Eclipse stores your preferred settings. You can have multiple workspaces with different settings. I generally store settings and projects in different folders, not in the same hierarchy.
23. If you get an error that Eclipse is not responding, click "Wait".
24. Yay. Eclipse is running. Check the version under the Help menu and make sure it is the latest.


Step 8: Take a Snapshot!

This would be a most excellent time to take a snapshot.


Step 9: Install Git

If you are using GitHub you might want to install Git and/or any Eclipse tools for Git. You'll want to back up and version your source as frequently as possible so as not to lose your work! This allows rollback of only your code vs. rolling back your entire environment with a VM snapshot.

https://eclipse.github.io


Step 10: Install AWS Goodies

Install your favorite AWS tools from the AWS tools web page.


Perhaps you have different VMs and configurations for different projects or customers, or perhaps you have a single VM that supports them all...the choice is yours!





Sunday, February 22, 2015

Windows Notes

Windows commands

http://commandwindows.com/windows-8-commands.htm

http://ss64.com/nt/

http://www.robvanderwoude.com/ntadmincommands.php

Net Use - mapping drives, printers, manage users

Net user
http://support.microsoft.com/kb/251394

Net session
https://technet.microsoft.com/en-us/library/bb490711.aspx

http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/net_use.mspx?mfr=true

Netsh commands http://commandwindows.com/netsh.htm

Netsh for Windows firewall
https://technet.microsoft.com/en-us/library/cc771920%28v=ws.10%29.aspx

Advanced Firewall
http://support.microsoft.com/kb/947709

Windows find from command line
http://www.howtogeek.com/206097/how-to-use-find-from-the-windows-command-prompt/

Systeminfo - display lots of stuff including Windows Domain
https://technet.microsoft.com/en-us/library/bb491007.aspx

Managing services from command line

http://commandwindows.com/sc.htm

Create Windows tasks from command line 
https://technet.microsoft.com/en-us/library/cc772785(v=ws.10).aspx