Create a GitHub Repository and Get Your Code Into It

Quick way to get your code into GitHub

These steps work on Windows.

1. Create an account on GitHub (github.com)

2. Recommended: turn on multi-factor authentication so logins also require a PIN from your cell phone

    a. Click on the settings icon
    b. Click on Security on the left

    c. Turn on two-factor authentication at the top middle of the screen and follow the instructions to activate it.

3. Install the GitHub desktop client software.

4. Create a repository on GitHub
    a. Click on repositories in the left menu

    b. Click the plus button (next to the settings button above)
    c. Choose New Repository from the dropdown

    d. Type a name for your repository
    e. Choose the Public radio button if you want the world to see your code; otherwise choose Private.

    f. Click Create Repository.

5. Create the local repository
    a. On the repository page, click "Set up in Desktop".

    b. Click "Launch Application".

    c. Create a new folder where you want the repository folder to be created.

    d. Click to select that location.

    e. A folder with the name of your repo will be created inside the folder you selected, along with a .git folder.
    f. Do not alter the .git folder - it is used to sync your files with GitHub.

    g. Move the files you want to put into GitHub into the repository folder.

6. Ignore things you don't want to go to GitHub

    a. In the root folder of your repository, create a file called .gitignore

    b. In Windows, right-click on the repository folder and choose Git Bash

    c. On the command line, type: touch .gitignore
    d. In the file, list the names of files and folders you want to exclude from going to GitHub

    e. Suggested exclusions: encryption keys, logins, AWS keys, other types of credentials, sensitive data
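As a sketch, a minimal .gitignore covering the kinds of files suggested above might look like this (the file and folder names are hypothetical examples, not a standard list - use whatever names your project actually has):

```
# Hypothetical examples - one pattern per line
*.pem
*.key
credentials.txt
aws-keys.csv
logs/
```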

7. Create a branch (if you want)

    a. Go back to your repository on GitHub

    b. Click the down arrow next to "master" at the top

    c. Enter a new branch name

8. Add files to GitHub in the branch you just created
    a. Check the box next to the files you want to commit in your file list

    b. Enter a summary (required) and comment (optional)

    c. Click "Commit to [branch]"

    d. The files are now listed under "Unsynced changes"

    e. Click Publish in the top right of the screen to push the files to the server.


- If you have existing code from earlier attempts that you want to reuse, you will probably have conflicts. Make sure to remove the .git directory, if any, from existing code before moving it into your new repository directory.

- You can't create a file starting with a dot in Windows Explorer, so use Git Bash (not the command window), as noted above, to create the file.

- You cannot create a repository and sync it using the method above if there are existing files in the location where you are trying to create the repository. It's easier to start with a new directory and move code into it if you run into issues.

I'm sure there are many other ways to do this - this is what worked quickly for me.

iPhones, iPads Won't Connect to NETGEAR Wireless Router

I've been having issues for a while where Apple devices - Macintosh computers, iPhones, iPads - won't connect to my NETGEAR wireless router.

It seemed that choosing "forget this network" on the iPhone or iPad and re-joining, or releasing and renewing the IP on a Macintosh computer, did the trick.

To me this was not a big deal. I had other priorities. Some of those who visited my humble abode, however, found this quite annoying and made quite a ruckus about it - to the point where I spent a whole weekend testing things out and searching the web. (I didn't really need the whole weekend - it was an excuse to try out some new network equipment, which was fun for a nerd.)

Finally found the culprit. The last update to the NETGEAR firmware has a bug.
To fix you can downgrade the firmware on your router.
Before you start, log in to your router and figure out its current firmware version. For my model, the firmware version is on the Advanced tab.

1. Go to the NETGEAR web site (netgear.com)
2. Click Support at the bottom of the page
3. Click on For Home
4. Click on Browse Products
5. Click on Routers, Modems & Gateways
6. Click on Wireless Routers & Gateway Modems
7. Choose your model.
8. Click Select
9. Under Firmware/Software, your version likely matches the latest version. Click Get More Downloads.
10. Scroll down and select the version one prior to the one you have.
11. Download the file and unzip it.
12. Find the tab in your router's admin interface where you can select a firmware file, and choose that file to apply that version to your router.

Sadly, these downloads are not served over SSL - HTTPS doesn't work on the download page. There is also no checksum to verify the contents of the file.

But hey, it fixed the problem for me.

Updates and patches are important for security, so keep an eye out for a newer version that lists this issue as fixed in the readme and continue to update at that point.

Surfboard cable modem won't work with router

When I plugged the router into a Surfboard SB6141 modem, no joy. 

If you don't want to read the story and want a simple fix, go to the SUMMARY section at the end of the post. I'm hoping that by sharing this, someone at Comcast might actually fix things so the process is a bit more seamless.

Why am I testing out the Surfboard SB6141 instead of using what Comcast provided? I've been having random issues with their equipment, it costs money to rent, plus I read up on DOCSIS 3.0 on web sites like this:

and this:

Getting all this to work is another matter...calling Comcast is not something I look forward to on most days so doing as much research in advance as possible to minimize the time involved.

I read that when the modem has a device plugged in, it will be in bridged mode. This means the cable modem will pass the connection through to the connected device to interact with the ISP. Apparently the activation process with Comcast or another ISP ties the computer's MAC address to the ISP connection. The ISP stores that MAC for 24 hours or longer. When you unplug the computer with the MAC address the ISP is expecting and plug in another device, such as a router, you will no longer get your Internet connection.

>> update: however, I spoke to someone at Comcast that suggested they no longer use MAC addresses for cable. However he couldn't explain why plugging in the TRENDnet router would not work. If anyone can - technically - explain this to me give me a shout on Twitter. I want nerd level details.

There are presumably a few ways around this MAC issue if you are on a network with such a problem, per what I read. I haven't tried any of these:

1. Wait 24 hours to see if the ISP picks up the new MAC Address :) but time is money and this sounds hokey. Not sure would solve the problem. Probably still have to call.

2. Configure the router to look like it is the device with the MAC the ISP is looking for. I don't like this option much personally.

3. Get the ISP to recognize the MAC address of the correct device. This seems like everyone is playing nicely and doing what is expected of them. Apparently easier said than done.

Specific examples -

I tried initially to plug in a cheap TRENDnet router just for fun. It was about $24 at Fry's. This didn't work at all. They suggest cloning the MAC address, as noted in the following article:

How to make a TRENDnet router work: clone the MAC address of the computer that registered during activation:

As a side note faking a MAC address shows you just how reliable MAC addresses are for authentication of a device (as in not at all).
Apparently Cisco routers have a similar issue when not using a Cisco cable modem. However, they are more powerful and allow administrators to program them to do more things. If you are not an admin, or trying to become one, this can be daunting and frustrating, so it's not recommended for home users who don't enjoy geeking out.
Cisco instructions to configure a non-Cisco cable modem:

But moving along, I unplugged the TRENDnet router and plugged in my NETGEAR wireless router I was using before I swapped out the cable modem. It worked without doing anything - but only for a wired connection from the machine I used to configure the modem to the NETGEAR device. Yes I plugged, unplugged, rebooted, released, renewed, reset, closed and opened. Nothing worked. So unless you only want one device on the network and no wireless this isn't very useful.

Ok now I'm on a mission. I'm going to try some different cable modems. Next up is a NETGEAR modem.

Once again I go through the activation routine. This time it forces me to create a brand new Xfinity account even though I created one yesterday setting up the Motorola Surfboard  modem. The credentials I created yesterday don't work either. Not to mention - I have a business class account and normally I login somewhere else. 

Anyway, there was only one hiccup in the activation: it said it hit an error, and I had to unplug and re-plug the modem. Then it said activation was complete and to close and re-open the browser.

Ah, but no. New browser brings back activation page. Hmm. Let's try turning off modem for longer and a reboot.


Have to start over.


On second attempt I don't think I had to enter as much information. It got to the page that said all good. I hit next, modem restarted, no connection.

Close browser. Wait for all the lights to indicate the modem is happy. Open browser. No dice. Hit fix connection which resets the network adapter.


Ok now let's try a newer version of NETGEAR router. Old is a couple years old at least. That's ancient in tech years.

Plug NETGEAR router into NETGEAR modem. 

And. It. Works.

The speeds were actually slightly better than the Surfboard though not significantly.

Ok change password, set up wireless. That works too. Sort of.

iPhones have issues with current NETGEAR firmware but that's a topic for another post.

Also, I guess I will be forced to call Comcast since my static IP is not working. (See below for how that turned out.)

So one option is don't buy a Motorola Surfboard or TRENDnet router if using Comcast. Or don't use Comcast. Get a decent NETGEAR modem and wireless router instead if you are looking for a simple option...except for the Apple issue I will explain later. And not if you have a fixed IP.

The Surfboard is getting a lot of hype, but apparently it can't work to full potential on Comcast (if at all). Based on the Cox link above, it doesn't seem the Surfboard itself is the issue.

--- Call to Comcast

I actually got someone good on the phone when I called Comcast who put up with all my questions and got me some answers. He couldn't answer everything but he did get info from someone who could explain the static IP issue and the "only approved equipment" and why the Surfboard only has one star on their web site.

You cannot have a static IP unless you use Comcast provided gear. This is because the gear from Comcast is altered and customized to interact with their systems. In the case of a fixed IP address there is certain configuration Comcast owns and manages to interact with their systems. It is proprietary and a security risk to expose this to customer owned devices where customers (and hackers) can do anything they want on that device. Hackers could use their box, or take over yours - and get into the Comcast network. Once in they can do damage to everyone.

If you don't believe all this mumbo jumbo I'd be happy to share a case study I did on the Target Breach. If you understand that you may understand why the security people at Comcast have such paranoia (which I share). I just wish they would explain things a bit better when customers call.

As for the Motorola Surfboard it is "compatible" but not "supported" by Comcast. That means it might work and should work but you have to call the vendor (Motorola) if you have problems. Hence the one star. And as noted won't work with a fixed IP.

A speed test at speeds I currently pay for showed no difference between Motorola, NETGEAR or Comcast provided gear.

If you really don't want Comcast gear and you don't need a fixed IP: the Surfboard connected to my computer OK, but when I logged in I couldn't change any settings. The routers I tried with the Surfboard had issues. Sounds like MAC cloning may or may not be the problem with the TRENDnet + Surfboard combination.

What was easier for me to set up, with a couple of hiccups, was the DOCSIS 3.0 gigabit modem plus wireless router from NETGEAR.

I figured out the SMC from Comcast was not the issue for my Apple products. I suspected the wireless router but had to prove it. For someone I know, and a free dinner!  :)

Looks like Comcast also has a router from Cisco that is DOCSIS 3.0, 8 channels down, 4 up. Looks pretty good. But their techs have to set it up if you want business class + fixed IP. Note that I was thinking of going for the DPC3008 to have a simple modem that wouldn't conflict with other devices but for a fixed IP they force you to get a wireless gateway device. You can ask them to set this up in bridge mode and turn off the wi-fi.

And while I was on the phone - they've been calling me to upgrade my service for same price. Got that done. Plugged back in the Comcast equipment.

Speed test: 56.95 down. 11.71 up.

So there you go. I'm getting the speed I paid for, and I resolved the issue with Apple iPads and iPhones not connecting to the NETGEAR wireless router - which had nothing to do with Comcast.

The only thing I want to check is quality of streaming video which was fuzzier than my friend's TV. Time for a TV test??

Which cable modem is right for you? Take your pick...just go for DOCSIS 3.0 with 8 channel down, 4 channel up if going very high speed and Comcast gear if you need fixed IP.

Update: 8/29/2014 --- Cisco cable modem / gateway

Tried out Cisco provided modem from Comcast Business class.

59.95 down
11.65 up

So pretty much the same as the others, but hopefully a Cisco device will play nicely with the other Cisco devices I am about to install.

I was able to ask the tech to turn off the two wireless networks and the wireless hot spot. I am not exactly sure why there is a separate wireless hot spot on top of the other wireless networks but don't want anything connecting to that device over wireless. You can login to that device and change the settings as needed if you want to turn off DHCP, turn on bridge mode, etc. I believe you can do the same with the SMC.

Installing Fedora in VMWare Fusion to run Git from Linux

Just FYI, I'm learning Mac, VMWare Fusion, ISOs, Linux distros, Git (and security, Cisco network gear, AWS) all at the same time. Forgive me for not being an expert at everything. I do have a bit of experience (see my site for a 20-year summary and client list).

The point of this post is how time consuming and convoluted it can be to do really simple things - even with a lot of tech experience.

I decided to install a Linux VM to test running Git from it. Couldn't get it working on Windows 2012 in VM so gave up. I just wanted a Git repo to store some Cisco ASA config files. I never got to the Git part. It took me all the time I had just to get the Linux VM running.

There has got to be an easier and trusted way to handle software checksums - like SSL for web sites. 

Besides that the instructions for everything else on web sites I went to were far from obvious.

First the convoluted steps I took.

Followed by the short list.

1. Download Fedora - hopefully got the right version.
2. It says I need a thumb drive or CD to run it. Find a thumb drive. Hope it has enough space.
3. Download checksum file to same directory
4. Run curl command to Fedora web site which does something
5. Try PGP command which fails because don't have PGP installed
6. Try to install PGP. Apparently bought by Symantec now.
7. Read about - cool.
8. Login at Symantec (recover password)
9. Download and install PGP
10. Realize the command on the Fedora site is gpg, not pgp
11. Visit the GNU Privacy Guard site seeking the free version
12. Search and find downloads on various web sites; one seems to be official.
13. Get the Mac version. More checksums to verify that.
14. Is meant for email. Wants my address book. No.
15. Command line verification doesn't work with Mac version as written. This is a pain in the ass.
16. Tried the SHA checksum and that didn't work either. Maybe I'll write my own checksum software.
17. Well I'm putting this on a disk to run in a VM so let's skip to that for now. In theory in VMWare won't have access to the host. (not necessarily recommended but I will follow up on this later). 
18. Turns out on Mac you can just right click and burn to disk. Ok forget the USB drive. Go find disk and external drive...(oh yeah there's my Windows 2012 CD in my CD drive...remove and replace with new, blank disc.)
19. Right click on downloaded file, burn, enter some stuff. Wait.
20. Open VMWare Fusion, choose file, new, cd just created, Linux, Fedora, 64 bit, finish, save
21. No worky. Says it can't find boot files.
22. Apparently I downloaded the live image, which won't work in a VM - it's intended to run from CD. Perhaps it requires Internet access. Not what I wanted. Bah. Or maybe I did something else wrong. Search for a different version.
23. Oh...there are the USB instructions. Huh. Later.
24. Let's try downloading the DVD version which creates an ISO and actually installs the OS.
25. Get another blank disk.
26. Burn. While waiting, search the web for Linux training that includes an overview of distros, LiveCDs and VMs (I already took one security class, but that assumes you have the OS installed...)
27. Dang. OS not found. Mouse stopped working. Hosed. 
28. Hard reboot. What? Download not finished. I guess I had multiple downloads going from clicking around the Fedora web site trying to find the DVD version.
29. Move all downloads to trash and re-download half finished DVD version. Watch the download bar to make sure it completes this time.
30. Downloads won't complete. Restart machine.
31. Figure out downloads hung up in Safari. Stop all but the one I want.
32. Download working again...wait for full download.
33. Install Google Chrome - going to see if downloads work better just in case Safari hangs again. While I wait.
34. Burn downloaded file to disc. Do some other stuff - 
35. Finally. Try to start VM with instructions above again. Same error when I try to run the VM.
36. Fiddle, fiddle, fiddle.
37. Search the web some more and find instructions for old version of Fusion and different version of Linux. No dice. Menus changed. 
38. Finally instructions that indicate creating the vm without installing the OS and going to menus I don't have. Hmm. Clues.
39. Instead of trying to play the vm I click around on all the icons at the top of the vm in fusion and finally see "choose a disk or disk image" when I click the cd drive looking icon.
40. AHA! I choose the ISO file I burned to the CD. Now it looks like it's installing something. Go through steps.
41. On the installation summary page I have to click on installation destination and choose OK. Not sure that's the best option but allows me to click begin installation.
42. Once I got this far, following the prompts seemed to work.

Ok, all that could boil down to a few steps if the checksum process were simplified:

1. Go to the Fedora web site
2. Click download
3. Click on Formats in sub menu
4. Click on 64bit DVD option (unless you have 32 bit older machine of course).
5. Download starts immediately.
6. [insert simpler check sum instructions here]
7. Right click (on downloaded file on Mac) and burn to blank CD
8. Start Fusion. 
9. File, new, choose appropriate Linux options
10. On the top of vm screen click the icon with cd coming out of drive.*
11. Choose disk option and select the file burned to CD
12. Follow prompts to install Fedora on the VM

* If your VM doesn't see the CD-ROM drive, you have to stop the VM and associate the drive with the VM while it is stopped. Then play the VM and choose the file on CD as noted above.

--- epilogue --- 
So I went to dinner with a friend and he was chastising me for not looking on YouTube. I explained that I kept thinking I had it figured out but kept getting foiled.

So anyway I looked on YouTube and saw videos but they didn't encompass the end to end process of downloading Fedora and then getting that download into a format (ISO on a disk) that would work in a VM. 

Maybe it's out there but would take some more searching. I'm done :) maybe someone will create one.

VLANs vs Subnets



Do you want to restrict traffic at layer 2 (switch - VLAN) or layer 3 (firewall - subnet) or both?

Do you need to cut down on broadcast noise (VLAN)?

How much overhead do you want to manage? 

Most VLANs are tied to one subnet, so you typically will see subnets without VLANs but not the opposite.

So if, for example, you want to set up a guest portion of your network and an internal portion for a SOHO, you can set up subnet 1 on the firewall and VLAN 1 on the switch that only works on subnet 1 for guests. Repeat with subnet 2 and VLAN 2 for your internal network.

You could have a DMZ hanging off the firewall that isn't behind the switch or in any VLAN.

A shared printer could hang off the switch, not in any subnet or VLAN. (Just one option... if you want to share the printer between subnets.)

Typically you'd have a device connected to the net, then the firewall, then the switch, if they are separate devices.

A larger company might have more firewalls between different network segments.

If an APT (hacker) can get onto a device that has permissions in different subnets and VLANs, you're not really segregated.


Related Links:

Extensive Q & A - says VLANs more secure because not based on IP.

VLANS and Subnets - 10 things you need to know

VLAN vs Subnet - says VLAN can be hacked but does not expound. Includes configuration.

Interesting discussion of segmenting traffic at Layer 2 and 3 - pros and cons of VLANs vs. subnets. Understand your network traffic.

Java JIT Optimization

Write short methods for inlining.

Avoid polymorphic dispatch at the same call site - e.g., putting different types in a list and calling the same method on each element.

Keep small objects local to aid in escape analysis

Use intrinsic methods. (putLong, etc.)

Inspect inlining decisions with -XX:+PrintInlining and -XX:+PrintAssembly (yeah... we all know assembly....)
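A sketch of the first two points above - a tiny accessor called from a monomorphic call site. The thresholds and diagnostic flags vary by JVM version, so treat the numbers as illustrative:

```java
public class InlineDemo {
    private final long value;

    InlineDemo(long v) { value = v; }

    // Tiny accessor: well under HotSpot's inlining size thresholds, so the
    // JIT can replace the call with a direct field read in hot loops.
    long value() { return value; }

    static long hotLoop(int iterations) {
        InlineDemo d = new InlineDemo(1);
        long sum = 0;
        // Monomorphic call site: only one receiver type ever appears here,
        // which keeps dispatch predictable and the method inlinable.
        for (int i = 0; i < iterations; i++) {
            sum += d.value();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(hotLoop(1_000_000));
        // To watch the JIT's decisions (diagnostic flags from above):
        //   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
    }
}
```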

Secure Java Programming

Notes for secure Java programming

Normalization - convert all data to common format before checking input since the same character can be represented by different codes in different languages and character sets.

Code injection - injecting commands into exec statements in Java, for example.

Identification - who user is

Authentication - verify person is who they say they are. Make sure you are using a reputable, solid JAAS implementation

Authorization - verify user is allowed to perform selected action. Many options: again make sure solid source and reputable vendor, well tested over time.

Output encoding - let everything come into your app, then validate and make sure data cannot be executed as code when submitted to any other process. Encode special characters for the output context.

Blacklisting - many ways to bypass it; not the best approach. Characters in different languages and character sets can slip past a blacklist.

Whitelisting - only accept known-good characters. Hard to do because it can break functionality, such as when checking names or passwords that may contain special characters.
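A sketch of whitelist validation for a username field. The allowed character set here is an illustrative assumption, not a recommendation for every field (a real name field, as noted above, likely needs a wider set):

```java
import java.util.regex.Pattern;

public class WhitelistDemo {
    // Allow-list: accept only characters known to be safe for this field.
    private static final Pattern USERNAME = Pattern.compile("[A-Za-z0-9_]{1,32}");

    static boolean isValidUsername(String s) {
        // matches() anchors the pattern to the whole string, so nothing
        // outside the allowed set can sneak in at either end.
        return s != null && USERNAME.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice_01"));       // accepted
        System.out.println(isValidUsername("alice'; DROP--")); // rejected
    }
}
```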

ReDoS - regular expression denial of service attack.

- Deterministic regular expression engines try to match one time

- Non-deterministic (backtracking) regular expression engines try multiple times - attackers can craft inputs that take systems down. Beware of repeating groups, and make characters required if possible.
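As a sketch of the repeating-group problem in Java (whose java.util.regex engine backtracks): the pattern strings are illustrative, and the risky one is only ever run on a tiny input here - on a long non-matching input it would hang:

```java
import java.util.regex.Pattern;

public class RedosDemo {
    // Nested repeating group: (a+)+ can backtrack exponentially on inputs
    // like "aaaa...b" -- each extra 'a' roughly doubles the work.
    static final Pattern RISKY = Pattern.compile("(a+)+x");

    // Equivalent linear-behaving form: no nested quantifier to backtrack over.
    static final Pattern SAFE = Pattern.compile("a+x");

    static boolean safeMatch(String s) {
        return SAFE.matcher(s).matches();
    }

    public static void main(String[] args) {
        // On short input both behave the same; never feed the risky
        // pattern attacker-sized input.
        System.out.println(RISKY.matcher("aaax").matches()); // true
        System.out.println(safeMatch("aaax"));               // true
        System.out.println(safeMatch("aaab"));               // false
    }
}
```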

Use parameterized queries wherever possible to prevent SQL injection
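To illustrate why: a sketch showing how string concatenation lets a single quote rewrite the query text, with the parameterized form shown in comments (the table and column names are made up):

```java
public class SqlInjectionDemo {
    // Naive concatenation: attacker input becomes part of the SQL text.
    static String naiveQuery(String user) {
        return "SELECT * FROM users WHERE name = '" + user + "'";
    }

    public static void main(String[] args) {
        // The classic probe: a single quote breaks out of the string literal
        // and the rest is parsed as SQL.
        System.out.println(naiveQuery("' OR '1'='1"));

        // With a PreparedStatement the query text is fixed up front and the
        // value is bound separately -- sent as data, never parsed as SQL:
        //   PreparedStatement ps = conn.prepareStatement(
        //       "SELECT * FROM users WHERE name = ?");
        //   ps.setString(1, user);
    }
}
```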

Can use Apache XML encoding library to encode XML data, for example

JD-GUI - Java decompiler

JavaSnoop - debug Java code without the source code. Defeats obfuscation tools:

Consider implementing sensitive code in C/C++ and calling it using JNI. This allows advanced obfuscation, anti-debug, and anti-tamper tactics.

Obfuscation and compiling to native code (Excelsior JET, GCJ - may not be production ready) can make it take longer to recover the source.

Don't trust client. Even if behind firewall. Some sources report over 50% of breaches are by accidental or intentional insiders. Phishing and social engineering can cause user machine to be hacked and used by an attacker.

Validate XML against an XSD or a DTD (javax.xml.validation.Validator)

XML injection - alters XML to invalid format.

XPath injection - XPath concatenation can allow querying additional information. No such thing as parameterized queries for XPath.

External entity attack - the SYSTEM keyword in a doctype or entity declaration points to an external file; malicious entity replacements can try to read system files or create DoS attacks.

XML entity expansion attack - recursive entity replacement creates a huge payload when parsed, even though the initial input is small.

To prevent XML attacks, you can configure the XML parser to disallow vulnerable features: disallow doctype declarations and turn on FEATURE_SECURE_PROCESSING. If you require external doctype declarations, write a custom entity resolver - this is complicated. Static analysis tools may check for this.
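A minimal sketch of hardening the JDK's default DOM parser against the doctype-based attacks above. The feature URI shown is the Xerces one used by the JDK parser; treat the exact error handling as illustrative:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

public class XxeHardening {
    static Document parseHardened(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Refuse DOCTYPE declarations entirely: blocks external entities
        // and entity-expansion ("billion laughs") payloads in one shot.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        DocumentBuilder db = dbf.newDocumentBuilder();
        return db.parse(new InputSource(new StringReader(xml)));
    }

    public static void main(String[] args) throws Exception {
        Document d = parseHardened("<config><key>value</key></config>");
        System.out.println(d.getDocumentElement().getTagName()); // config

        try {
            parseHardened("<!DOCTYPE x [<!ENTITY e SYSTEM \"file:///etc/passwd\">]><x>&e;</x>");
        } catch (SAXException e) {
            System.out.println("doctype rejected"); // the parser refuses it
        }
    }
}
```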

Path manipulation attack - unauthorized access to files by breaking out of the current path or accessing a file via an absolute path. To prevent it, verify you are in the expected directory before proceeding.
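One common way to do that check is to canonicalize the resolved path and confirm it is still under the base directory. A sketch, assuming a hypothetical /tmp/uploads base directory:

```java
import java.io.File;
import java.io.IOException;

public class PathCheck {
    // Resolve the requested name under the base directory, then verify the
    // canonical (".."-free, symlink-resolved) result is still inside it.
    static boolean isInside(File baseDir, String requested) throws IOException {
        String base = baseDir.getCanonicalPath() + File.separator;
        String target = new File(baseDir, requested).getCanonicalPath();
        return target.startsWith(base);
    }

    public static void main(String[] args) throws IOException {
        File base = new File("/tmp/uploads"); // hypothetical upload root
        System.out.println(isInside(base, "report.txt"));       // inside: ok
        System.out.println(isInside(base, "../../etc/passwd")); // escape: rejected
    }
}
```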

Use the Java security manager to limit what code can do. Run your application with the -Djava.security.manager flag and specify a policy file.

Temp files may be pre-created with malicious content; on Unix systems an attacker can pre-create a symbolic link in the temp location. Java now has the ability to set file permissions.

java.util.Random - predictable; use java.security.SecureRandom where unpredictability matters

Java 7 introduced several new methods in java.nio.file.Files that can create files with unpredictable names. It also will not create a file if one with that name already exists, and temp files can be registered for deletion on shutdown.
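A sketch contrasting the two random generators and the temp-file API described above:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.SecureRandom;
import java.util.Random;

public class TempAndRandom {
    // java.util.Random is a seeded PRNG: with a known seed the whole
    // sequence is reproducible, so never use it for tokens or IDs.
    static boolean sameSeedSameSequence() {
        Random r1 = new Random(42), r2 = new Random(42);
        return r1.nextInt() == r2.nextInt();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sameSeedSameSequence()); // true: fully predictable

        // SecureRandom draws from the OS entropy source instead.
        SecureRandom sr = new SecureRandom();
        byte[] token = new byte[16];
        sr.nextBytes(token); // suitable for session IDs, temp names, etc.

        // Files.createTempFile picks an unpredictable name and refuses to
        // reuse an existing file, avoiding pre-created malicious temp files.
        Path p = Files.createTempFile("demo", ".tmp");
        System.out.println(Files.exists(p));
        Files.delete(p);
    }
}
```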

In Java 7, a try statement can have objects that implement the AutoCloseable interface, which will be automatically closed at the end of the try (try-with-resources).

Java 7 also has multi-catch blocks to handle multiple types of exceptions in a single catch clause.
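A small sketch combining both Java 7 features above; the file path is a deliberately nonexistent example:

```java
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

public class Java7Features {
    static String firstLine(String path) {
        // try-with-resources: the reader implements AutoCloseable and is
        // closed automatically, even when an exception is thrown.
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            return in.readLine();
        } catch (FileNotFoundException | SecurityException e) {
            // multi-catch: one handler for several exception types
            return "unreadable: " + e.getClass().getSimpleName();
        } catch (IOException e) {
            return "io error";
        }
    }

    public static void main(String[] args) {
        System.out.println(firstLine("/no/such/file"));
    }
}
```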

Race conditions - timing issues while accessing resources causes data to be invalid.

TOCTOU - check a property on a resource then someone changes that resource before the resource is used.

The Java Files API has TOC/TOU issues: it checks file properties, but those properties could be changed before the file is used, because the Files API works on file paths. Using the File object does not have this vulnerability because it gets a handle on the actual file.

Deadlocks can cause DoS attack.

Encryption problems: skipping cert checks, no hostname check, no certificate validation, no certificate in the truststore

HttpsURLConnection in Java does this for you - other libraries may not

Chef, Ansible, Puppet, Salt

Articles comparing Chef, Puppet, Ansible, Salt


Usage Stats

Ansible beats salt on security

Ansible vs Puppet, Chef

A search of the MITRE CVE database shows some pretty substantial vulnerabilities in Salt, the most in Puppet (though it is the most widely used and has been out longer), and the fewest for Ansible:

Ok, after all that I lean towards Ansible, but I need to try it. I like the idea of using a language popular with sysadmins vs. a customized language. The agent-less model appeals more from a security and administration standpoint: agents can't be hacked if they're not there. Push vs. pull can get changes out more quickly. This - having not yet used the tool. But I also know the AWS kids at Amazon use it and love it.

Here are some interesting ideas to try:

Fixing HeartBleed with Ansible

Secure MySQL with Ansible

Ansible SSH security considerations

As noted in previous posts, I see storage of keys separate from data as the number one problem with encryption in companies today. Earlier this year Ansible added a vault feature. It will be interesting to see how this works and whether it facilitates this separation.

Web Security Vulnerabilities

Same Origin Policy: the web browser's ability to restrict scripts from accessing DOM properties and methods of another site.

JSONP - opens up a lot of risks. Recommend not using.

DOM based cross site scripting

Cross domain messaging

Stealing data from web storage

Risks introduced by HTML5 elements and attributes (video, audio, canvas, geolocation)

Architectural Flaws, Implementation Bugs

Can run into buffer overflow with unmanaged code in C#.

XSS - JavaScript can be injected via many different tags: video, script, etc. Insert JavaScript into a form or URL, or submit a malformed request straight to the server, to direct response data to an alternate site and steal it. If an attacker can get a user to log in to a malicious page, they can steal credentials and session IDs.

Input validation - attempts to block certain characters using whitelists, blacklists, exact match, or semantic rules.

Output Encoding - may be preferable to input validation. This tactic allows entering any character but encoding problematic characters so they won't be interpreted as executable code. There are common encoding libraries but some are not suitable for production. 

Output encoding should be done for any user or 3rd party input in HTML, CSS, JavaScript, URL, etc.
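A minimal sketch of output encoding for the HTML element context. This is hand-rolled purely for illustration - per the note above about library quality, a vetted encoding library is the right choice in production, and each output context (CSS, JavaScript, URL) needs its own encoder:

```java
public class HtmlEncode {
    // Encode the characters that are significant in HTML element content
    // so user input is rendered as text, never interpreted as markup.
    static String encodeForHtml(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(encodeForHtml("<script>alert('x')</script>"));
    }
}
```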

SQL injection: insert SQL into web inputs to run arbitrary SQL code against the web database. The first step is to insert a single quote; if the site is vulnerable it will throw an error. Check the version, etc., to get the database type, then query system tables and columns, then execute arbitrary SQL. It's not always that simple, but that's the gist of it.

Session Vulnerabilities:

Session fixation: change the session ID after login to prevent it

Session Prediction



Eavesdropping: the Firesheep plugin - keep session URLs on SSL.

Cross-Site Request Forgery - making a request to another site, which is different from XSS, which injects code into a request. Example: loading an image from another site includes the cookies for that site, which can be exploited by the site embedding the image link. So, if someone is logged in to a site and receives an email containing an image whose URL is actually a malicious command, the cookies are sent when the user views the image, allowing the malicious action to occur. To prevent it: #1, prevent XSS. #2, use tokens per session, page, or form - not stored in a cookie, and tied to the session.
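A sketch of generating such an anti-CSRF token with SecureRandom; the token size and encoding are illustrative choices:

```java
import java.security.SecureRandom;
import java.util.Base64;

public class CsrfToken {
    // Anti-CSRF token: unpredictable, stored server-side against the
    // session, and embedded in each form -- never delivered in a cookie.
    static String newToken() {
        byte[] bytes = new byte[32];           // 256 bits of randomness
        new SecureRandom().nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        String t = newToken();
        System.out.println(t.length());          // 43 chars for 32 bytes
        System.out.println(t.equals(newToken())); // false: tokens differ
    }
}
```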

When using an iFrame can set sandbox properties so no code in the iFrame can affect the page embedding the iFrame.

Client-side validation can help reduce load on the server, but it should never be the only validation, because client-side validation can be bypassed by web site visitors.

The TamperData Firefox plugin alters web request submissions.

Set autocomplete = off on sensitive input fields.

For Web Storage, introduced in HTML5, store sensitive data in session storage instead of local storage so it is not persisted.

Indirect Reference Map - map fake data to real data and only send fake data to the browser and map it back to the real data for server processing.

The LSASS system service runs on Windows. .NET apps can use it to encrypt values and keep only the encrypted values in memory.

Wireless Access Points, PEAP and Radius Servers

I started looking up what it takes to use PEAP with a wireless access point.

There are a bunch of parts and pieces need to put together...

RADIUS protocol ... RADIUS service on a server to auth.

Note that if you use an EAP solution that incorporates a vulnerable version of SSL, you will probably be subject to the Heartbleed attack.

ARP cache entries - view, modify, secure

The following links go to commands to view and modify the ARP cache on a machine. To help prevent cache poisoning, you might want to prevent gratuitous ARP by configuring static MAC address entries for various machines.





Guard against gratuitous arp vulnerabilities for VOIP phones

Cisco document with details about gratuitous arp

Decoding IP Header - Example

Let's take a sample IP packet header and see what's in it. Here's our sample random IP header pulled out of WireShark traffic:

45 20 01 b4 96 25 40 00 39 06 60 6a 5d b8 d7 c8 0a 01 0a 13

An IP header is between 20 and 60 bytes, and a length greater than 20 means we have options. So how long is this header?

Each hexadecimal character is four bits, and 8 bits = a byte, so every two characters is one byte. So let's count the bytes:

45 20 01 b4 96 25 40 00 39 06 60 6a 5d b8 d7 c8 0a 01 0a 13

Ok looks like we have a 20 byte header so there are no options.

We'll need a couple things for our translation -- the cheat sheets in my last post to convert hex to binary and decimal:

Also the layout for the IPv4 header in this post which tells us the purpose of the hex values in the various positions:

Byte 1 (45)

The first two numbers are always the version and header length.

4 in hex = version 4 (IPv4), which is the most common version.

5 is the header length. 5 in hex = 5 in decimal, and the field counts 4-byte words, so we multiply by 4 to get our length = 20 (confirming our analysis above).

Byte 2 (20)

Byte 2 is 20. This is Type of Service. Going to skip this one for now as most routers ignore it.

Bytes 3-4 (01 b4) 

This is the datagram length (header + payload)

So the binary version of this, using our cheat sheet in prior post is:

0 0 0 0 0 0 0 1 1 0 1 1 0 1 0 0

We've got 1's in positions: 2, 4, 5, 7, 8

We grab the decimal values for these and add them up:

4 + 16 + 32 + 128 + 256 = 436

Yep, that matches up with Wireshark so cool.

Bytes 5-6 (96 25)

This is our unique id - it should be a random number, so not going to bother translating this one right now. Might be important if you want to verify randomness.

Next 4 bits - flags (4)

We need to turn this value into 4 bits to determine our flags.

Binary version of 4 is:

0 1 0 0

That means we have one flag set - but it's the DF (Don't Fragment) bit, so this datagram must not be fragmented. The MF (More Fragments) bit next to it is 0, indicating no more fragments exist.

Next 13 bits - fragment offset (0 0 0)

This is our fragment offset. (Strictly speaking, the flags field is only 3 bits, so the last bit of the nibble above is actually the first bit of the 13-bit offset.) With DF set, MF clear, and a fragment offset of 0, this datagram is a single, unfragmented packet.

Next byte (39)

Next byte is time to live.  39 in hex translated to binary:

0 0 1 1 1 0 0 1

We've got values in positions: 0, 3, 4, 5 - grab the decimal values:

1 +  8 + 16 + 32 = 57

Cool - matches Wireshark again.

1 byte for the protocol (06)

Translate to binary

0 0 0 0 0 1 1 0

Translate to decimal - positions 1, 2

2 + 4 = 6

Take a look at our nifty protocol chart:


Looks like we have TCP (#6).

Next 2 bytes (60 6a)

This is our checksum. Equipment uses this to verify nothing has inadvertently changed.

4 bytes  (5d b8 d7 c8)

Source address

We need to figure out if we have a Class A, B or C IP address to know which bytes refer to network and which bytes refer to host in the address.

A - one byte for network, three bytes for host
B - two bytes for network, two bytes for host
C - three bytes for network, one byte for host

Look at first number to determine if class A, B or C:

1-127 = A
128-191 = B
192-223 = C

Each byte is part of the address with a dot (.) in between (dotted decimal notation)

5d = 0 1 0 1 1 1 0 1 = positions = 0, 2, 3, 4, 6 = 1 + 4 + 8 + 16 + 64 = 93
b8 = 1 0 1 1 1 0 0 0 = positions = 3, 4, 5, 7 = 8 + 16 + 32 + 128 = 184
d7 = 1 1 0 1 0 1 1 1 = positions = 0, 1, 2, 4, 6, 7 = 1 + 2 + 4 + 16 + 64 + 128 = 215
c8 = 1 1 0 0 1 0 0 0 = positions = 3, 6, 7 = 8 + 64 + 128 = 200

So we have a class A address (first octet = 93).

Address is 93.184.215.200.

We can look that up with whois... it's a RIPE address?? Not sure why a computer on my network is connecting to a European address... but that's a topic for another day.

inetnum:         93.184.212.0 - 93.184.215.255
netname:         EDGECAST-NETBLK-03
descr:           NETBLK-03-EU-93-184-212-0-22
country:         EU
admin-c:         DS7892-RIPE
tech-c:          DS7892-RIPE
status:          ASSIGNED PA
mnt-by:          MNT-EDGECAST
source:          RIPE # Filtered

4 bytes (0a 01 0a 13)

Destination address

Same concept as above: 0a = 10, 01 = 1, 0a = 10, 13 = 19, so the destination address is 10.1.10.19 - a private class A address.
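The whole hand translation above can be sanity-checked with a short script. Here's a sketch in Python using the standard struct module against the same 20 header bytes:

```python
import struct

# The sample header from the post, as raw bytes.
header = bytes.fromhex("452001b4962540003906606a5db8d7c80a010a13")

# Unpack the fixed 20-byte IPv4 header: version/IHL, TOS, total length,
# identification, flags + fragment offset, TTL, protocol, checksum.
ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum = \
    struct.unpack("!BBHHHBBH", header[:12])
src = ".".join(str(b) for b in header[12:16])
dst = ".".join(str(b) for b in header[16:20])

version = ver_ihl >> 4             # high nibble
ihl_bytes = (ver_ihl & 0x0F) * 4   # length field counts 4-byte words
df = bool(flags_frag & 0x4000)     # Don't Fragment flag

print(version, ihl_bytes, total_len, ttl, proto, df, src, dst)
# → 4 20 436 57 6 True 93.184.215.200 10.1.10.19
```

Every value matches the manual decode: IPv4, 20-byte header, 436-byte datagram, TTL 57, protocol 6 (TCP), DF set.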

Hexadecimal to Binary to Decimal - Cheat Sheet

I'm studying hexadecimal to decimal conversions for packet header analysis (IP, TCP, UDP, etc).

Trying to come up with a cheat sheet to make the whole thing easier to remember.

First of all each numbering system has a single character representing each possible single digit value. After these values are used up you start tacking these single digits together to come up with bigger values.

For example the single digit values for each of the following numbering systems are:

Binary = base 2 = 0, 1
Decimal = base 10 = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Hexadecimal = base 16 = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F

In the list of characters above for hexadecimal, the letters are just a way to use a single character for a two digit decimal number. So our 16 base-16 values are 0-15, and we use letters for 10-15 (which are 2 digit numbers) as follows:

A = 10
B = 11
C = 12
D = 13
E = 14
F = 15

So why the heck are we making this all complicated and using these crazy numbering schemes instead of the decimal numbering system we know and love? Computers need a way to store and represent numbers. They don't have fingers. They have circuits. (I'm probably way over-simplifying this - intentionally) These [Boolean] circuits can either be on or off. I like to think of it as a row of light switches. Flip some of them on, some of them off. On is represented as 1 and off is represented as 0.

So let's say you had a row of 4 light switches and starting from right to left, first is off, second is on, third is on, fourth is off. That would look like:

0 1 1 0

The light switches on or off allow you to represent a binary number. So what's that binary number in decimal? Binary is base 2. For each position that has a 1, we take 2 to the power of that position (starting with position 0 on the right) and add up the results to get the decimal number. So in this case we have positions 3, 2, 1, 0. Position 1 and position 2 have a 1; position 0 and position 3 don't, so:

(0) + (2^2) + (2^1) + (0) = 0 + 4 + 2 + 0 = 6

So 0 1 1 0 in binary (or circuits in a computer turned on and off) = 6 in decimal

So what's hexadecimal for anyway? It takes less space to represent a number in hexadecimal, where a single hexadecimal digit can represent four binary digits. In other words, instead of representing 15 as 1 1 1 1 we can just use F. Hexadecimal is used instead of decimal because 10 is not a power of 2 (there's no x where 2^x = 10), so it's not easy to translate a series of 1's and 0's to base 10.

In computer terminology each single digit of storage (circuit on or off, i.e. 1 or 0) is called a "bit". 8 bits = a "byte". 4 bits = half a byte or a "nibble". (har har)

One hexadecimal character is 4 bits (with a 1 or a 0 in each spot). If you think about it, it makes sense. Turn all four bits on (1) and calculate the decimal number:

1 1 1 1

or: (2^3 = 8) + (2^2 = 4) + (2^1 = 2) + (2^0 = 1) = 15 (counting from 0 to 15 = 16 values).

We can turn that single four digit binary number into a 1 digit hexadecimal number and store 1 digit instead of 4

Ok now we want to take a hexadecimal digit and convert it to decimal. So let's take 6, for example.

We'll have four bits to represent 6.

_ _ _ _

OK so for each of those spots we have to either put in a 1 or a 0 as required to represent a 6. Each of those slots represents a binary value, and if each spot were filled with a 1 we'd have these decimal values for each corresponding position (again 2^0, 2^1, etc.):

8 4 2 1

Ok so how do we come up with 6? 4 + 2. So the slots for 4 and 2 are set to 1 and the slot for 1 and 8 are set to 0. That gives us binary 6:

0 1 1 0

Let's try D. D in hexadecimal = 13 in decimal as shown above. We will need a 1 in positions 3, 2 and 0 (8 + 4 + 1), so binary digit D is represented as:

1 1 0 1

Now we can look at this another way to come up with our cheat sheet. We know the decimal value of each hexadecimal digit above. We can map out the binary to hex translation in a table like this:

Hex     Binary
0       0 0 0 0
1       0 0 0 1
2       0 0 1 0
3       0 0 1 1
4       0 1 0 0
5       0 1 0 1
6       0 1 1 0
7       0 1 1 1
8       1 0 0 0
9       1 0 0 1
A (10)  1 0 1 0
B (11)  1 0 1 1
C (12)  1 1 0 0
D (13)  1 1 0 1
E (14)  1 1 1 0
F (15)  1 1 1 1

It will also be helpful to memorize or table-ize the values of each binary position for our translations from hex to decimal. For each position in a binary number there is a corresponding decimal number, which is 2^position. We know already that position 0 = 1 (2^0), position 1 = 2 (2^1), position 2 = 4 (2^2) and position 3 = 8 (2^3). Our full table for 16 positions could look like this, where each subsequent value doubles the value in the prior position:

Position: 15    14    13   12   11   10   9   8   7   6  5  4  3  2  1  0
Value:    32768 16384 8192 4096 2048 1024 512 256 128 64 32 16 8  4  2  1
Ok now let's say we have some crazy looking hexadecimal number that looks like this:

AE06
First of all we know there are four bits for each hex digit:

_ _ _ _ | _ _ _ _ | _ _ _ _ | _ _ _ _

Now as above we know that for each four slots we'll have 8 4 2 1 as the decimal values of the positions. So let's translate that crazy hex number into binary one character at a time.

A = 10 as shown above and in our chart we see that is 1 0 1 0.
E = 14 and that is 1 1 1 0
0 = 0 and that is 0 0 0 0
6 = 6 and that is 0 1 1 0

Put that all together and what does it spell?! Ok I'll get out of cheerleader mode now.

1 0 1 0 1 1 1 0 0 0 0 0 0 1 1 0

What do we do with that?? Well we know that it's base 2, so for each digit we calculate 2^position and add up the results. So we have a 1 in positions (starting with position 0 on the right): 1, 2, 9, 10, 11, 13, 15.

We can grab the decimal value for each of those binary digit positions from the above binary position to decimal table and add them up:

2 + 4 + 512 + 1024 + 2048 + 8192 + 32768 = 44550

We can check that calculation on the handy dandy Windows calculator. Open it up and choose "programmer" from the view menu. Click on the "hex" radio button. Enter AE06. Then click on the decimal radio button. Yay it worked! I'm typing this all up from scratch and I guess I got it figured out.
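If you'd rather check with code than the calculator, the same conversion is a one-liner in Python, and the bit-position method from the walkthrough can be reproduced too:

```python
# Hex -> decimal in one step:
value = int("AE06", 16)
print(value)  # → 44550

# Or the long way, mirroring the cheat-sheet method: find which
# bit positions hold a 1, then sum 2**position for each.
bits = format(value, "016b")                  # '1010111000000110'
positions = [i for i, b in enumerate(reversed(bits)) if b == "1"]
print(positions)                              # → [1, 2, 9, 10, 11, 13, 15]
print(sum(2 ** p for p in positions))         # → 44550
```

Both paths land on 44550, the same answer the Windows calculator gives.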

Hopefully having the translation cheat sheets above will help in a pinch, or you can go the route of memorizing all of the above - kind of like my parents used to grill into me and their grade school students to learn their math facts :)

Related - Translating IP headers (and UDP, TCP, etc. not mentioned in the post below) from Hex to meaningful values humans can understand - I'm assuming here most people don't speak hex.

Key Management Systems & Cloud Encryption

Listening to a SANS cryptography session. I've always wondered why there is so much focus on secure code for encryption but not a lot of discussion about key management. I've blogged and tweeted about this mystery in the past; a caveat about the first link - I'm digging into cryptography a bit more right now and may revise it:

Data needs to be protected at rest, protected in transit, and the key needs to be protected. If you fail at any one of these, your encryption is useless. A lot of people understand the first two. The third is often overlooked and is possibly the most important, because a short key length still takes some time to crack, but keys out in the open mean you might as well hand the data to the adversary.

Oh, and about the "well, it's on our internal network behind the firewall" argument - I hope anyone involved in a corporation of any size that is utilizing pen testing services and/or has been breached understands by now that this is a completely naïve viewpoint. I used to have to argue with network admins at managed data centers who didn't want to set my outbound traffic firewall rules. Now outbound traffic is one of the primary ways of determining whether you're hacked, since all malware calls home. APTs have attackers infiltrating systems throughout corporate networks. If they get access to your data, you want to have the keys that decrypt that data in a separate place.

So now that we understand that even if our key is in our own house we need to separate it from the data and protect it from people who might get access to the encrypted data, how and where do we store and manage the keys?

We need to store them away from the data, make them accessible to the applications that need to decrypt the data, and protect them in transit and at rest (and in memory, for things like credit cards on POS machines...)

There are conceptual discussions of protecting the key, and I understand you should put your key on a separate server, away from your data. But what is the best way to actually implement this solution?  What about protecting keys in conjunction with using a cloud provider where you want to protect your keys and have them completely managed by someone other than the cloud provider so anyone that gets access to your encrypted data in the cloud cannot access the keys?

One of the quandaries I have with using these vendors is the Trojan horse concept. If your keys are the absolute most critical thing you need to protect, because they provide access to all your data, giving your keys to a third party system is a bit scary.

One thing would be to make sure a very limited number of people have access to the keys. Additionally, through separation of duties, you can make sure the same people who manage the keys do not have access to the encrypted data. But how do you actually implement that?

I remember from a business school class a company that outsourced production of certain products to China. Their approach was to have different companies produce different parts, with no single company having access to the complete formula. Perhaps such an approach would work with key management, similar to multi-factor authentication: in order to decrypt, you need multiple pieces of information. Of course this adds overhead to processing and slows things down.
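As a toy illustration of that "no single party holds the whole secret" idea, here's a naive XOR-based key-splitting sketch in Python - every share is required to rebuild the key, and any single share looks like random bytes. A real system would more likely use a threshold scheme such as Shamir's Secret Sharing; the function names here are made up:

```python
import secrets

def split_key(key: bytes, n_shares: int) -> list:
    """Split a key into n random-looking shares; ALL shares are
    required to reconstruct it (no threshold)."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        # XOR the key with each random share; the final share is the
        # leftover, so XOR-ing everything together yields the key again.
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine_key(shares) -> bytes:
    """XOR every share together to recover the original key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key
```

Give each share to a different party (or system) and no one of them alone can decrypt anything.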

The other issue is actually limiting access. After my first graduate degree in software engineering in 2000 I went out to look at technology for a couple of venture capital firms. I didn't have much security knowledge at that point but I remember going to look at some technology from a company one of the VCs was considering for investment. A Russian guy proudly explained how they were able to bypass corporate firewalls by tunneling through SSL ports. This freaked me out somewhat. If these guys can tunnel through an SSL channel which is supposed to be for SSL and completely bypass the firewall, it made the firewall kind of useless. I didn't understand all the ins and outs of networks, ports, packets, etc. at that point but I thought: what's the point of blocking all these ports if someone can use any old port to get into your network?

So clearly blocking access to your keys by just adding firewall rules for ports is not enough. You'll need authentication, limiting access to only specific IP addresses, authorized users and servers, and you have to make sure no one can get into those servers or IP addresses, or insert a man-in-the-middle attack, because then they can tunnel through to your key management store.

Obviously a number of layers of security are needed at different levels to protect your keys and make sure the only applications that should access those keys can get to them to decrypt only the data to which they should have access. There are complications with unencrypted data in memory as well. Thinking about the way SSL VPNs work - they download a thin client and everything runs in an encrypted RAM drive. Maybe something like that could be used for corporate applications running in the cloud.

Going to great lengths to encrypt every piece of data and protect it in every possible way can get very expensive and slow down system performance. Perhaps a better approach is to limit the risk of exposure to a reasonable degree and add additional layers of detection for any malicious or unauthorized activity. In this day and age of APTs, and complexity that can be mind-boggling at times, I would argue detection is more important than prevention - I have mentioned that in my own company in terms of auditing financial systems for data errors. This viewpoint was confirmed in the SANS 401 class I took, which has the motto: "Prevention is ideal, detection is a must". So perhaps you limit your exposure and add a lot of auditing and alerts for unexpected activity.

I'm currently listening to the Diffie-Hellman key exchange section of SANS 401, which operates under the concept of asymmetric encryption: being able to do key exchange in the presence of an adversary. Being able to utilize vendor systems that can provide amazing value in terms of innovation, reduced time to market, segregation, fault tolerance, scalability and performance - without a capital expenditure - while minimizing the risk of loss of intellectual property, NPI data and credit cards at the same time is an interesting problem.

For the moment I'll be going through this list of vendors that have key management systems (from Wikipedia) and reading a few books on the matter ... to be continued (I know the suspense is killing you).

Maybe we can get a panel from these vendors to do a presentation at an upcoming Seattle AWS Architects and Engineers meet up:


Gartner says Amazon has 5 times the compute power of the next 18 cloud providers combined.

Pace of innovation increases when you increase deployment iterations and reduce the risk.

AWS builds custom servers. Optimized performance and 30% off the cost compared to a private cloud purchasing servers from vendors.

DynamoDB gives you NoSQL with consistent (as opposed to eventually consistent) reads.

Because Amazon has built so many data centers they are obtaining expertise and getting better at it.

The success of Amazon is based in a big way on a distributed model of teams who manage their own technology [and interact through services, based on a blog post I read].

Scaling SQL databases is easy - partition the database. The problem is repartitioning the data while taking on new traffic. Initially Amazon avoided this by buying bigger boxes.


Amazon wrote a paper on Dynamo, a highly available key-value store.

Distributed hash table.

Trade-off consistency for availability.

Allowed scalability while taking live traffic.

Scaling was easier but still required developers to benchmark new boxes, install software, wear pagers, etc.

Was a library, not a service.


DynamoDB: a service.

- durability and scalability
- scale is handled (specify requests per second)
- easy to use
- low latency
- fault tolerant design (things fail - plan for it)

At Amazon, when they talk about durability and scalability, they always go after three points of failure for redundancy.

Quorum in distributed systems

DynamoDB handles different scenarios of replica failures so developers can focus on the application.

SimpleDB has max 10 GB and customer has to manage their own permissions.

Design for minimal payload, maximum throughput.

Can run map reduce jobs through DynamoDB. EMR gives Hive on top of DynamoDB.

Many AWS videos and re:invent sessions on AWS web site.

HasOffers uses DynamoDB for tracking sessions, deduplication.

Session tracking is perfect for NoSQL because look everything up by a single key: session id.

Deduplication: event deduplication.

Fixing DynamoDB problems ... double capacity, maybe twice, fix the problem, then drop the capacity.

Being asynchronous and using queues is a nice option.

Relational databases are more flexible for querying. Something to consider when determining whether you want to use RDBMS or NoSQL.

Hash key is single key. Can also have combo key.

Hash key = distribution key

Optimal design = large number of unique hash keys + uniform distribution across hash keys.

Important to pick hash key with large cardinality.
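To see why cardinality and uniform distribution matter, here's a hypothetical sketch of hash-based partitioning. DynamoDB's actual internal hash function is not public; MD5 here is purely illustrative:

```python
import hashlib
from collections import Counter

def partition_for(hash_key: str, num_partitions: int) -> int:
    """Map a hash key to a partition by hashing it (illustrative only;
    this is not DynamoDB's real algorithm)."""
    digest = hashlib.md5(hash_key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# High-cardinality keys (e.g. session ids) spread evenly across partitions...
spread = Counter(partition_for(f"session-{i}", 4) for i in range(10000))

# ...while a low-cardinality key sends every request to one hot partition.
hot = Counter(partition_for("status-active", 4) for _ in range(10000))

print(sorted(spread.values()), dict(hot))
```

With unique keys the 10,000 requests land roughly 2,500 per partition; with one repeated key, a single partition absorbs all the traffic (and its share of the provisioned throughput).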

Range key: composite primary key - 1:N relationships. Optional range condition, like == < > >= <=.

e.g. customer id is the hash key and photo id is the range key

Local secondary indexes. e.g. Two customers share a key. Requires more throughput capacity.

Hash + Range must be unique

Data types supported: string, number, binary and sets of the three.

Cannot add or change secondary indexes after initial creation of table...may be coming.

Global secondary indexes are separate tables asynchronously updated on your behalf. GSI lookup is eventually consistent. May require one or more updates.

Local secondary index = max 10 GB per hash key. May be a reason to move to GSI.

GSI has its own provisioned reads and writes, whereas LSIs use the table's provisioned reads and writes.

1-1 relationship: hash key and secondary index

1-Many index: hash key and range key

NoSQL - no transaction support in DynamoDB

Can only double throughput in a single change. Amazon is looking at changing this.

Choosing the right data store:

SQL: structured data, complex queries, transactions.

NoSQL: unstructured data, easier scaling

DataPipeline automates moving between data stores.

A client-only app is available which emulates DynamoDB so you can develop without paying AWS fees.

OSI and TCP Model - Network Layers

Studying for GIAC and just seeing if I can write these from memory.

We use the OSI model to talk about network layers and the TCP/IP model to implement.

OSI Model

(P)lease (D)o (N)ot (T)hrow (S)ausage (P)izza (A)way

Physical layer (layer 1) - transmission of raw binary data (0's and 1's). Typically via electrical signals (Ethernet), radio frequency (wireless) or light pulses (fiber optics).

Data Link Layer (layer 2) - Switches typically operate at this layer. This is the logical layer - where the data has meaning as opposed to raw binary data.

Network Layer (layer 3) - routing layer - where most routers operate and determine the path the data will take through the network. Some switches, referred to as routing switches, operate at this layer.

Transport Layer (Layer 4)  - packages and orders data as it flows through the network.

Session Layer (Layer 5) - virtual connection between two points for transmission of data.

Presentation Layer (Layer 6) - transforms the data into a machine-independent format that can be read by any computer, whether big endian (most significant byte first) or little endian (least significant byte first).

Application Layer (Layer 7) - the layer that handles providing particular needed network services to the application (HTTP, FTP, etc.)

The TCP/IP Model

The TCP/IP Model has four layers, but some of them are just combinations of the layers above. There are still 7 conceptual layers; we just group them together in the TCP/IP model as follows:

Network - Layers 1 and 2
Internet - Layer 3
Transport - Layer 4
Application - Layer 5, 6, 7
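The grouping above can be captured in a small lookup table - a trivial sketch in Python, with a made-up helper name:

```python
# The four TCP/IP layers and the OSI layers each one absorbs.
TCP_IP_MODEL = {
    "Network":     [1, 2],
    "Internet":    [3],
    "Transport":   [4],
    "Application": [5, 6, 7],
}

def tcp_ip_layer(osi_layer: int) -> str:
    """Return the TCP/IP layer containing a given OSI layer number."""
    for name, osi_layers in TCP_IP_MODEL.items():
        if osi_layer in osi_layers:
            return name
    raise ValueError("OSI layers run 1-7")

print(tcp_ip_layer(3))  # → Internet
print(tcp_ip_layer(6))  # → Application
```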

Devices & Tools

NICs operate in Layer 1 and Layer 2, handling transmission of binary data via ethernet, token ring, wireless.

Sniffers operate at layer 2.

Switches natively operate at layer 2, though some have layer 3 routing capabilities, and blade systems may allow for firewall modules.

Routers operate at layer 3. They use the IP address to determine which network the packet goes to next, but use ARP, routing tables and MAC addresses to get the packet from one hop to the next.

Firewalls operate at layer 3 or layer 4.


Not too shabby. Didn't have to look anything up :)

On to header analysis and protocols.