
Storing Shibboleth IDP Logs in a Database with IP Addresses


Shibboleth’s IDP can store audit logs that indicate when people authenticate against the IDP web application. These files are written to disk by default using the settings in the logging.xml configuration file. This tutorial will show how audit logs can be placed in an MS SQL database and also include the IP addresses of the connecting clients.

Shibboleth uses SLF4J, a logging facade that fronts a variety of backend loggers. By default, it uses logback to handle its process and audit logs. Logback supports a DBAppender. Unlike the RollingFileAppender, the DBAppender doesn’t support an encoder/Pattern configuration. It places all logging messages in a standard database schema, which can be found in the logback-0.9.28/logback-classic/src/main/java/ch/qos/logback/classic/db/dialect directory of the logback source code.

Note that in version 0.9.28 of logback, there is a bug in the MS SQL schema where all the event_id fields have an invalid type of DECIMAL(40). These must be changed to DECIMAL(38). This tutorial assumes the use of an MS SQL database. Other databases will work, but the triggers and schema will need to be adjusted for those dialects accordingly.
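
For reference, the corrected definition of one of the affected tables looks roughly like the following (an excerpt from memory; consult the script in the logback source tree for the authoritative column sizes):

CREATE TABLE logging_event_property (
    event_id     DECIMAL(38) NOT NULL, --was DECIMAL(40); 38 is the MS SQL maximum precision
    mapped_key   VARCHAR(254) NOT NULL,
    mapped_value VARCHAR(1024),
    PRIMARY KEY (event_id, mapped_key),
    FOREIGN KEY (event_id) REFERENCES logging_event(event_id)
)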

In the Shibboleth logging.xml configuration file, start by adding the following appender:

    <appender name="IDP_DB_APPENDER" class="ch.qos.logback.classic.db.DBAppender">
      <connectionSource class="ch.qos.logback.core.db.DataSourceConnectionSource">
        <dataSource class="com.jolbox.bonecp.BoneCPDataSource">
          <driverClass>com.microsoft.sqlserver.jdbc.SQLServerDriver</driverClass>
          <jdbcUrl>jdbc:sqlserver://dbserver.example.edu:5555;databaseName=ShibAudit</jdbcUrl>
          <username>someUsername</username>
          <password>somePassword</password>
        </dataSource>
      </connectionSource>
    </appender>

Replace the database name, server, username and password with your own values.

For the above example, the BoneCP connectionSource is used for connection pooling. The BoneCP libraries will be available if uApprove has been installed. Additional configuration options can be specified for tweaking connection counts and partitions within the pool.
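
For example, the dataSource element above could be extended with pool settings like these (the element names follow BoneCP’s setter names; the values are only a starting point):

      <dataSource class="com.jolbox.bonecp.BoneCPDataSource">
        <driverClass>com.microsoft.sqlserver.jdbc.SQLServerDriver</driverClass>
        <jdbcUrl>jdbc:sqlserver://dbserver.example.edu:5555;databaseName=ShibAudit</jdbcUrl>
        <username>someUsername</username>
        <password>somePassword</password>
        <partitionCount>2</partitionCount>
        <minConnectionsPerPartition>1</minConnectionsPerPartition>
        <maxConnectionsPerPartition>5</maxConnectionsPerPartition>
      </dataSource>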

An alternative to BoneCP is the c3p0 connection pool library. This library comes with Shibboleth and requires no extra jar files:

  <appender name="IDP_DB_APPENDER" class="ch.qos.logback.classic.db.DBAppender">
    <connectionSource class="ch.qos.logback.core.db.DataSourceConnectionSource">
      <dataSource class="com.mchange.v2.c3p0.ComboPooledDataSource">
        <driverClass>com.microsoft.sqlserver.jdbc.SQLServerDriver</driverClass>
        <jdbcUrl>jdbc:sqlserver://dbserver.example.edu:5555;databaseName=ShibAudit</jdbcUrl>
        <user>someUser</user>
        <password>somePassword</password>
      </dataSource>
    </connectionSource>
  </appender>

Again, replace the database name, server, username and password. Next, the new appender needs to be attached to the audit logger:

    <logger name="Shibboleth-Audit" level="ALL">
        <appender-ref ref="IDP_AUDIT" />
        <appender-ref ref="IDP_DB_APPENDER" />
    </logger>

At this point, the Shibboleth IDP can be restarted, and attempts to authenticate with the IDP should result in log entries in the database. Make sure this is working. If not, check the idp-process.log as well as Tomcat’s catalina.out log file to determine if there were errors creating the DBAppender object.
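
A quick way to confirm entries are arriving is to query the standard logback table directly:

SELECT TOP 10 timestmp, logger_name, formatted_message
  FROM logging_event
 ORDER BY timestmp DESC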

Shibboleth 2.2.1 writes its audit logs in a pipe-delimited format that can easily be parsed. Rather than let the logger store this long pipe-delimited string in our database, it would be beneficial to convert these logs into a structure that’s easier to query. Start by creating the following table in the same database the DBAppender writes to:

CREATE TABLE audit_log (
   logtime datetime,
   remoteAddr VARCHAR(255),  
   requestbinding VARCHAR(100),
   requestId VARCHAR(50),
   relayingPartyId VARCHAR(255),
   messageProfileId VARCHAR(100),
   assertingPartyId VARCHAR(255),
   responseBinding VARCHAR(100),
   responseId VARCHAR(50),
   principalName VARCHAR(8),
   authNMethod VARCHAR(100),
   releasedAttributeIds VARCHAR(255),
   nameIdentifier VARCHAR(100),
   assertionIDs VARCHAR(100)
);
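
For reference, each Shibboleth-Audit message is a single pipe-delimited line whose fields correspond to the columns above. A fabricated example:

20130401T120000Z|urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect|_4ffee9a1|https://sp.example.edu/shibboleth|urn:mace:shibboleth:2.0:profiles:saml2:sso|https://idp.example.edu/idp/shibboleth|urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST|_993c1f02|jsmith|urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport|uid,mail|_b0a2c3d4|_77de55aa|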

Now, a trigger will be added to the logging_event table so that whenever a row is inserted, it will be parsed and placed into the new table.

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE TRIGGER [dbo].[parseLog]
   ON  [dbo].[logging_event]
   AFTER INSERT
AS 
BEGIN

	SET NOCOUNT ON;

  DECLARE @data VARCHAR(4000)
  DECLARE @xml XML
  DECLARE @unixTimestamp datetime
  
  --Convert UNIX timestamp, with ms, to mssql datetime
  SET @unixTimestamp = (SELECT dateadd(ms,CAST(RIGHT(timestmp,3) AS INT),dateadd(ss,CAST(timestmp AS BIGINT)/1000,'01/01/1970')) FROM INSERTED)
       
  --Data as XML for easy parsing
  SET @data = (SELECT formatted_message FROM INSERTED)
  SET @xml = '<Cols><Col>' + REPLACE(@data,'|','</Col><Col>') + '</Col></Cols>'    
  
  --Different Tables for different logger types
  DECLARE @type AS VARCHAR(254)
  SET @type = (SELECT logger_name FROM INSERTED)
  
  IF @type = 'Shibboleth-Audit'
  BEGIN 
	  --Fields to parse 
	   DECLARE @requestbinding AS VARCHAR(100)
	   DECLARE @requestId AS VARCHAR(50)
	   DECLARE @relayingPartyId AS VARCHAR(255)
	   DECLARE @messageProfileId AS VARCHAR(100)
	   DECLARE @assertingPartyId AS VARCHAR(255)
	   DECLARE @responseBinding AS VARCHAR(100)
	   DECLARE @responseId AS VARCHAR(50)
	   DECLARE @principalName AS VARCHAR(8)
	   DECLARE @authNMethod AS VARCHAR(100)
	   DECLARE @releasedAttributeIds AS VARCHAR(255)
	   DECLARE @nameIdentifier AS VARCHAR(100)
	   DECLARE @assertionIDs AS VARCHAR(100)
	   
	   --Store the event_id in the IP address. The event_property trigger
	   --  will replace this with the real IP
	   DECLARE @eventId AS DECIMAL(38,0)
	   SET @eventId = (SELECT event_id FROM INSERTED)

	  SELECT 
		@requestBinding = x.d.value('Col[2]', 'VARCHAR(100)'),
		@requestId = x.d.value('Col[3]', 'VARCHAR(50)'),
		@relayingPartyId = x.d.value('Col[4]', 'VARCHAR(255)'),
		@messageProfileId = x.d.value('Col[5]', 'VARCHAR(100)'),
		@assertingPartyId = x.d.value('Col[6]', 'VARCHAR(255)'),
		@responseBinding = x.d.value('Col[7]', 'VARCHAR(100)'),
		@responseId = x.d.value('Col[8]', 'VARCHAR(50)'),
		@principalName = x.d.value('Col[9]', 'VARCHAR(8)'),
		@authNMethod = x.d.value('Col[10]', 'VARCHAR(100)'),
		@releasedAttributeIds = x.d.value('Col[11]', 'VARCHAR(255)'),
		@nameIdentifier = x.d.value('Col[12]', 'VARCHAR(100)'),
		@assertionIDs = x.d.value('Col[13]', 'VARCHAR(100)')
	  FROM  @xml.nodes('/Cols') x(d)

	  INSERT INTO audit_log (logtime, remoteAddr, requestBinding, requestId, relayingPartyId, messageProfileId, assertingPartyId,
		responseBinding, responseId, principalName, authNMethod, releasedAttributeIds, nameIdentifier, assertionIDs)
		VALUES(@unixTimestamp, @eventId, @requestBinding, @requestId, @relayingPartyId,
		@messageProfileId, @assertingPartyId, @responseBinding, @responseId, @principalName,
		@authNMethod, @releasedAttributeIds, @nameIdentifier, @assertionIDs)
  END
END
GO 

Since MS SQL has no built-in function for splitting fields on a delimiter, the above trigger replaces the pipe symbols with opening and closing XML tags so that MS SQL’s built-in XPath engine can be used to parse the fields. A conversion must also be done to translate the UNIX timestamp to an MS SQL datetime type. Finally, the event_id is used as a placeholder in the field containing the client’s IP address. This is because, by default, Shibboleth’s audit logs do not contain IP information. It can, however, be added using a custom servlet filter with SLF4J.
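
To see the splitting trick in isolation, the following standalone snippet (sample data only) shows how a pipe-delimited string becomes addressable columns:

DECLARE @data VARCHAR(100)
DECLARE @xml XML
SET @data = 'first|second|third'
SET @xml = '<Cols><Col>' + REPLACE(@data,'|','</Col><Col>') + '</Col></Cols>'

--Col[2] addresses the second field, returning 'second'
SELECT x.d.value('Col[2]', 'VARCHAR(100)') AS secondField FROM @xml.nodes('/Cols') x(d)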

SLF4J supports an MDC (Mapped Diagnostic Context), which allows values to be mapped to a logger for a given thread instance. Since only one thread in a Java servlet container is used per connection at any given time, this can be used to hold items such as the connecting client’s IP address for logging. The DBAppender inserts these properties in the logging_event_property table. A custom filter can be written to create these properties, but SLF4J comes with the MDCInsertingServletFilter, which injects certain common attributes into the MDC such as the user agent, remote host and query string.

In the idp.war, add the following filter to the web.xml.

  <filter>
    <filter-name>MDCInsertingServletFilter</filter-name>
    <filter-class>
      ch.qos.logback.classic.helpers.MDCInsertingServletFilter
    </filter-class>
  </filter>
  <filter-mapping>
    <filter-name>MDCInsertingServletFilter</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>

The SLF4J documentation recommends adding the MDCInsertingServletFilter as the first filter, but unless other filters need the attributes the MDC injects for their own logging, this isn’t strictly necessary. For the database audit logging, the filter can be placed at the end of the idp.war‘s web.xml file.

Finally, a trigger must be added to handle the properties that are inserted, namely the remote IP address, and update the record of the original log entry.

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE TRIGGER [dbo].[setLogIP]
   ON  [dbo].[logging_event_property]
   AFTER INSERT
AS 
BEGIN   
   IF (SELECT mapped_key FROM INSERTED) = 'req.remoteHost'
   BEGIN   
      UPDATE audit_log  SET remoteAddr = (SELECT mapped_value FROM INSERTED)
        WHERE remoteAddr = CONVERT(varchar(255),(SELECT event_id FROM INSERTED))
   END   
END
GO

The DBAppender is transaction based, so if any triggers or query statements fail (e.g. if permissions are not set up correctly for the database user), the entire logging transaction will be rolled back and no results will appear in the database table. It is best to implement this in stages, starting with getting the DBAppender working and then adding the new table and triggers.

Since the standard text based log files do not include an IP address, it is best to include this new attribute in the original files as well. This can be done by modifying the RollingFileAppender for the audit log. The following example adds the IP address and also sets the rollover for the log file to 1 year (365 days). This should be adjusted to the retention requirements of the institution’s legal department.

    <appender name="IDP_AUDIT" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>/opt/shibboleth-idp/logs/idp-audit.log</File>

        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>/opt/shibboleth-idp/logs/idp-audit-%d{yyyy-MM-dd}.log</FileNamePattern>
            <maxHistory>365</maxHistory>
        </rollingPolicy>

        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <charset>UTF-8</charset>
            <Pattern>%X{req.remoteHost}|%msg%n</Pattern>
        </encoder>
    </appender>

Having audit logs in a database makes it convenient to aggregate Shibboleth authentication requests for reports, as well as retrieve information quickly for legal and security requests. It is recommended to also keep the RollingFileAppender to use as a backup in case of problems with the database connection.
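
For example, a month’s worth of authentications per service can be pulled straight from the new table:

SELECT relayingPartyId, COUNT(*) AS authentications
  FROM audit_log
 WHERE logtime > DATEADD(day, -30, GETDATE())
 GROUP BY relayingPartyId
 ORDER BY authentications DESC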


RearViewMirror 0.8.8.6 Released


It’s been nearly two and a half years since the last release of RearViewMirror. There aren’t any new features in this release, just several updates to improve speed and a check for future releases.

For those who have never used it before, RearViewMirror is an over-glorified version of those mirrors office workers attach to their monitors so people don’t sneak up on them. Instead of using a mirror though, it uses a webcam and allows users to share their webcams with others around the office.

Making the Thunderbird mail icon more useful on MacOS


Up until Mozilla Thunderbird 3.0, the MacOS version’s dock icon showed the number of new messages received since you last clicked on the window. Now, by default, it shows the total number of unread messages, which is pretty useless if there are just a ton of messages in your bulk mail accounts you have no intention of reading. A bug report was filed for this issue in order to restore the old functionality (or at least make it optional). Although the bug was closed and the option was added to Thunderbird, it’s not in the main user interface. It must be set using the advanced configuration editor.

The basic steps to setting this option are as follows:

1. Open Thunderbird > Preferences > Advanced > General > Config Editor

Open Preferences

Thunderbird Advanced Configuration Editor

2. Type mail.biff.use_new_count_in_mac_dock and change the setting to true

User New Mail Count in Dock Configuration Setting

That’s all there is to it. Now your Thunderbird dock icon will only display the number of new messages since you last opened an e-mail, similar to the mail notification icon in other clients including Microsoft Outlook.

RearViewMirror 0.8.9.3 Released


It’s been a little over a year since I last updated RearViewMirror. The new version contains the following features:

  • Options, both global options and individual camera options
  • Ability to play an audio file (wav) when motion is detected
  • Ability to record motion that is detected

RearViewMirror is an application developed to be a fancy cubicle mirror. It supports multiple camera and MJPEG sources and pops up a window when motion is detected.

Signature Verification Between Java and Python


Using public/private key pairs to digitally sign text is a common way to validate the authenticity of a piece of data. However, dealing with RSA keys, certificates and signatures can often be a bit overwhelming, especially between two different languages or platforms. This tutorial will demonstrate how to use RSA keys to generate and validate signatures in both Java and Python 2.

For this demonstration, we’ll use two additional libraries. For Java, we’ll use the Legion of the Bouncy Castle (seriously) and for Python we’ll use M2Crypto. I will not cover using pycrypto. I spent hours looking at many tutorials as well as pycrypto’s documentation, and I could never get it to correctly generate or sign a digest in a way that could be verified by any other cryptography library. If anyone has working pycrypto examples, please either e-mail me or post them in the comments and I’ll update this tutorial accordingly.

Although it is possible to do RSA signature verification in the stock Java 1.6 environment, using the Bouncy Castle Provider (bcprov) allows for importing keys from PEM files: base64-encoded text files for keys and X.509 certificates, with plain-text anchor lines indicating the key type. Without bcprov, a key tool would be needed to convert the key(s) in the PEM file into the DER format that Java supports natively.

First, we’ll look at signing a piece of data using Python. The following example takes two command line arguments, one is the key file and the second is the data to sign. The data file must exist, but if the key file does not, it will simply generate a new RSA private key and save it to the file in PEM format.

#!/usr/bin/env python2

from M2Crypto import EVP, RSA, X509
import sys
import base64
from os import path

# Sumit Khanna - PenguinDreams.org
#   Free for educational and non-commercial use

if __name__ == '__main__':

  if len(sys.argv) != 3:
    sys.stderr.write('Usage ./pysign.py <pem file> <data file to sign>\n')
    sys.exit(1)

  pemFile = sys.argv[1]
  dataFile = sys.argv[2]

  if not path.isfile(dataFile):
    sys.stderr.write('Data file does not exist\n')
    sys.exit(1)

  if not path.isfile(pemFile):
    sys.stderr.write('PEM file does not exist. Generating\n')

    #keysize in bits is 2048, RSA public exponent is 65537
    # Callback suppresses ....++ output on key generation
    key = RSA.gen_key(2048, 65537, callback=lambda x, y, z: None)

    #Using a cipher of None prevents being prompted for a passphrase
    # A callback function can also be supplied
    key.save_pem(pemFile, cipher=None)


  key = EVP.load_key(pemFile)
  key.reset_context(md='sha1')
  key.sign_init()
  key.sign_update(open(dataFile,'r').read())

  #Signatures are binary, so we base64 encode the result for portability
  print(base64.b64encode(key.sign_final()))

Next, we’ll create a Java class to verify the signature. It takes in the same arguments for a key file and a data file, however in this case the key must exist. The signature can be read from a file using the third argument, or taken from standard in. This allows us to chain the two programs together as we’ll see later.

/*
 * Sumit Khanna - PenguinDreams.org
 *   Free for educational and non-commercial use
 */
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileReader;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.security.KeyPair;
import java.security.PublicKey;
import java.security.Security;
import java.security.Signature;
import javax.xml.bind.DatatypeConverter;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.openssl.PEMReader;


public class JavaVerify {

	  public static PrintStream out = System.out;
	  public static PrintStream err = System.err;
	  
	  public static void main(String[] args) throws Exception {
		  
	    if(args.length < 2 || args.length > 3) {
	      err.println("Usage: java JavaVerify  <pem file> <data file to verify> [signature file]");
	      err.println("\tIf no signature file is given, signature is taken via stdin");
	      System.exit(1);
	    }
	    
	    File pemFile = new File(args[0]);
	    File dataFile = new File(args[1]);
	    
	    if(!dataFile.exists()) {
	    	err.println("Data File Does Not Exist");
	    	System.exit(1);
	    }
	    
	    if(!pemFile.exists()) {
	    	err.println("PEM File Does Not Exist");
	    	System.exit(1);
	    }
	    
	    //BC Provider initialization
	    Security.addProvider(new BouncyCastleProvider());
	    PEMReader pemReader = new PEMReader(new FileReader(pemFile));
	    PublicKey pubKey = ((KeyPair) pemReader.readObject()).getPublic();
	    
	    //load public key
	    Signature sg = Signature.getInstance("SHA1withRSA");
	    sg.initVerify(pubKey);
	    
	    //read data file into signature instance
	    FileInputStream fin = new FileInputStream(dataFile);
	    byte[] data = new byte[(int) dataFile.length()];
	    fin.read(data);
	    sg.update(data);
	    
	    //read signature from file
	    byte[] signature = new byte[0];
	    if(args.length == 3) {
	    	
	    	File sigFile = new File(args[2]);
	    	if(!sigFile.exists()) {
	    		err.println("Signature file could not be found");
	    		System.exit(1);
	    	}
	    	
	    	fin = new FileInputStream(sigFile);
	    	signature = new byte[(int) sigFile.length()];
	    	fin.read(signature);
	    }
	    //read signature from standard in
	    else {
	        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
	        signature = in.readLine().getBytes();
	    }
	    
	    //validate signature
	    if(sg.verify(DatatypeConverter.parseBase64Binary(new String(signature)))) {
	    	out.println("Signature Verified");
	    	System.exit(0);
	    }
	    else {
	    	out.println("Signature Verification Failed");
	    	System.exit(2);
	    }
	    
	  }
	
}

To test these two programs, I’ve created a file called sample_data.xml that contains some very basic XML data:

<?xml version="1.0" ?><SomeRandom><XML xml="data"/></SomeRandom>

Now, we can simply chain these two programs together to sign and verify data. Please note that for this to work, M2Crypto must already be present in your Python 2 installation. If you’re running Linux, most distributions have M2Crypto in their package manager, or you may install M2Crypto manually. The Bouncy Castle Provider must also be in the classpath as shown in the following example:

export CLASSPATH=".:bcprov-jdk15-140.jar"
javac JavaVerify.java
./pysign.py key.pem sample_data.xml | java JavaVerify key.pem sample_data.xml 

We will get the result Signature Verified after running these commands. You can also try generating two signatures with different keys and mixing them between the pysign and JavaVerify programs:

./pysign.py key1.pem sample_data.xml
./pysign.py key2.pem sample_data.xml| java JavaVerify key1.pem sample_data.xml

This will result in Signature Verification Failed.

It is also possible to generate the signature in Java with the following code.

/*
 * Sumit Khanna - PenguinDreams.org
 *   Free for educational and non-commercial use
 */
import java.io.File;
import java.io.FileInputStream;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.PrintStream;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Security;
import java.security.Signature;

import javax.xml.bind.DatatypeConverter;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jce.provider.JDKKeyPairGenerator;
import org.bouncycastle.openssl.PEMReader;
import org.bouncycastle.openssl.PEMWriter;

class JavaSign {

  public static PrintStream out = System.out;
  public static PrintStream err = System.err;

  public static void main(String[] args) throws Exception {
	  
    if(args.length != 2) {
      err.println("Usage: java JavaSign <pem file> <data file to sign>");
      System.exit(1);
    }
    
    File pemFile = new File(args[0]);
    File dataFile = new File(args[1]);
    
    if(!dataFile.exists()) {
    	err.println("Data File Does Not Exist");
    	System.exit(1);
    }
    
    Security.addProvider(new BouncyCastleProvider());
    KeyPair keys = null;
    
    if(!pemFile.exists()) {
    	
    	err.println("PEM File Does Not Exist. Generating.");
    	KeyPairGenerator r = KeyPairGenerator.getInstance("RSA");
    	
    	//keysize in bits is 2048
    	r.initialize(2048,new SecureRandom());
    	keys = r.generateKeyPair();
    	PEMWriter pemWriter = new PEMWriter(new FileWriter(pemFile));
    	pemWriter.writeObject(keys);
    	pemWriter.close(); //You must flush or close the file or else it will not save
    }
    else {
    	keys = (KeyPair) new PEMReader(new FileReader(pemFile)).readObject();
    }
    
    //read data file into signature instance
    FileInputStream fin = new FileInputStream(dataFile);
    byte[] data = new byte[(int) dataFile.length()];
    fin.read(data);
    
    //Sign the data
    Signature sg = Signature.getInstance("SHA1withRSA");
    sg.initSign(keys.getPrivate());
    sg.update(data);
    
    //output base64 encoded binary signature 
    out.println(DatatypeConverter.printBase64Binary(sg.sign()));

  }

}

The following Python code can then be used to verify the signature:

#!/usr/bin/env python2

# Sumit Khanna - PenguinDreams.org
#   Free for educational and non-commercial use

from M2Crypto import EVP, RSA, X509
import sys
import base64
from os import path
import fileinput


if __name__ == '__main__':

  if len(sys.argv) < 3 or len(sys.argv) > 4:
    sys.stderr.write('Usage ./pyverify.py <pem file> <data file to verify> [signature file]\n')
    sys.stderr.write('\tIf no signature file is given, signature is taken via stdin\n')
    sys.exit(1)

  pemFile = sys.argv[1]
  dataFile = sys.argv[2]
  sigFile = sys.argv[3] if len(sys.argv) == 4 else '-'

  if not path.isfile(pemFile):
    sys.stderr.write('PEM File could not be found\n')
    sys.exit(2)

  if not path.isfile(dataFile):
    sys.stderr.write('Data File could not be found\n')
    sys.exit(2)

  key = EVP.load_key(pemFile)

  for line in fileinput.input(sigFile):
    key.reset_context(md='sha1')
    key.verify_init()
    key.verify_update(open(dataFile,'r').read())
    if key.verify_final(base64.b64decode(line)):
      print "Signature Verified"
      sys.exit(0)
    else:
      print "Signature Verification Failed"
      sys.exit(2)

These programs can all be chained together to sign data and verify signatures:

export CLASSPATH=".:bcprov-jdk15-140.jar"
javac JavaSign.java JavaVerify.java
./pysign.py key1.pem sample_data.xml| java JavaVerify key1.pem sample_data.xml 
java JavaSign key1.pem sample_data.xml| ./pyverify.py key1.pem sample_data.xml 
./pysign.py key1.pem sample_data.xml| ./pyverify.py key1.pem sample_data.xml 
java JavaSign key1.pem sample_data.xml| java JavaVerify key1.pem sample_data.xml 

There are some important things to note in these examples. For one, we’re using a shared private key. The public key can always be derived from the private key, but the reverse cannot be done. Typically, in a production environment, only the public key is accessible to the service responsible for verification, and the private key stays with the application that generates and signs the data. It is possible to save the public and private keys separately using the Bouncy Castle and M2Crypto APIs.
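
For instance, M2Crypto can write the public half to its own file (a brief sketch; the file names are arbitrary):

#!/usr/bin/env python2
from M2Crypto import RSA

#Generate a key pair and save the halves to separate PEM files
key = RSA.gen_key(2048, 65537, callback=lambda x, y, z: None)
key.save_pem('private.pem', cipher=None)  #private key, unencrypted
key.save_pub_key('public.pem')            #public key only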

Another thing to note is that, at the time of this writing, M2Crypto only supports Python 2. If your project is built on Python 3, you’ll have to find another cryptography library.

The source code for all of the mentioned examples is available for download:

PenguinDreams-SignatureVerification.zip

PenguinDreams-SignatureVerification.tar.bz2

Reassigning DNS Entries in Windows/Active Directory using Powershell


Typically, internal DNS entries for websites must be different from the external addresses due to NAT issues. If your organization has a lot of web sites that exist on either a single server or a set of identical servers behind a load balancer, it’s best practice to have all DNS entries be CNAME records pointing to either that particular server’s DNS entry or the entry for a server farm’s load balancer. Recently I was involved in a mass server migration where actual IPs were used throughout a Windows DNS server. The following is a PowerShell script designed to update DNS records in bulk on an Active Directory domain controller.

First, I’d like to thank Ansgar Wiechers for his help on this problem, as well as Chris Dent’s very helpful powershell/dns post. In the following example, machines were being moved from the 192.168 subnet into the 10.19 subnet. This script will obviously need to be modified for your needs, and it comes with no guarantees or warranties whatsoever, so you run it entirely at your own risk. This was a very quick script written during a high stress server move, and we still had problems afterwards that may or may not have been related to this script.

I’ve just posted it up here because I couldn’t find any good examples of this anywhere. I knew someone else, somewhere down the line, would run into a very similar situation. On a side note, if this particular organization was using BIND or another file based DNS server instead of Windows / Active Directory, this entire operation could have been done with a series of simple search and replaces.


  #The name of your Active Directory / DNS server
  $dnsServer = "myExampleAD"

  $scope = New-Object Management.ManagementScope("\\$dnsServer\root\MicrosoftDNS")
  $path = New-Object Management.ManagementPath("MicrosoftDNS_Zone")
  $options = New-Object Management.ObjectGetOptions($Null,[System.TimeSpan]::MaxValue, $True)
  $ZoneClass= New-Object Management.ManagementClass($scope,$path,$options)
  $Zones = Get-WMIObject -Computer $dnsServer -Namespace "root\MicrosoftDNS" -Class "MicrosoftDNS_Zone" 

  foreach($Z in $Zones) {
   if (( $Z | Select-Object -expand Reverse) -ne "True") {
    $domain = $Z | Select-Object Name
    $dname = $domain.Name
    dnscmd /EnumRecords $domain.Name `@ /type A | % {
      $name = $_.split(" ")[0]
      $ip = $_.split("`t")[-1] 

      #The IP records for our web servers. These will need to be changed
      # depending on your situation

      if (($ip.Contains("192.168.0.14")) -or 
         ($ip.Contains("192.168.0.80")) -or
         ($ip.Contains("192.168.0.220")) -or
         ($ip.Contains("192.168.0.229")) -or
         ($ip.Contains("192.168.0.230")) -or
         ($ip.Contains("192.168.0.241")) -or
         ($ip.Contains("192.168.0.16")) -or
         ($ip.Contains("192.168.0.140")) -or
         ($ip.Contains("192.168.0.221")) -or
         ($ip.Contains("192.168.0.224")) -or
         ($ip.Contains("192.168.0.225")) -or
         ($ip.Contains("192.168.0.226")) -or
         ($ip.Contains("192.168.0.227")) -or
         ($ip.Contains("192.168.0.233")) -or
         ($ip.Contains("192.168.0.234")) -or
         ($ip.Contains("192.168.0.235"))      
         ) {

           #Here we see the ranges we are moving from: 192.168 to 10.19. 

           $ip =  $_.split("`t")[-1]       
           $new = $_.split("`t")[-1] -replace "192.168", "10.19"
           echo "dnscmd $dnsServer /recorddelete $dname $name A"

           # you can uncomment the /f switch below if you're brave. Otherwise, you'll be prompted for each delete

           dnscmd $dnsServer /recorddelete $dname $name A # /f 
           echo "dnscmd $dnsServer /recordadd $dname $name A $new"
           dnscmd $dnsServer /recordadd $dname $name A $new 
         }
      }    
    }
  }

Obviously this was very quick and dirty, and I wanted to minimize the changes being made. A better solution would be to create CNAME records for each of those A records instead, pointing to the correct DNS entry for each IP address listed. However, this example should be a decent starting point for similar types of mass changes on Windows based DNS systems.
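
Under that approach, the record swap inside the loop would look something like the following, with the farm’s real hostname substituted for the hypothetical webfarm.example.edu:

  # Replace an A record with a CNAME pointing at the load balancer
  dnscmd $dnsServer /recorddelete $dname $name A /f
  dnscmd $dnsServer /recordadd $dname $name CNAME webfarm.example.edu.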

Copy Reddit Subscriptions from One Account to Another


For those of you who use Reddit, if you’ve ever wanted to abandon one account for another, or just copy your subscriptions between your primary and your throw-away account, I’ve written a Python script called copy_reddit. It uses the praw API to copy both subscriptions and friends lists.

Invalid partition table in VMWare ESX


Anyone who has expanded a drive in Linux knows it’s a two-step process. First, the partition table must be altered to include the new space. Second, the file system must be expanded to make use of the new space within its partition. It’s a fairly straightforward process I’ve done many times, but I ran into an interesting issue when attempting this within VMware.

After deleting and recreating the partition table, where the partition to be expanded has the same start block and a new end block, fdisk might state that the partition is in use and the kernel cannot refresh partition data. At this point, the partprobe command can be used or the system can be rebooted to read the new partition information. After rebooting my VM, I saw the following message: Invalid partition table.

VMWare: Invalid Partition Table

For some reason, I was unable to get vSphere to boot a recovery ISO, so I had to shut down the VM and attach its volume to another active VM to try and diagnose what went wrong. The problem was that I didn’t set the active partition.

Disk /dev/sda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        7832    60806025   83  Linux

Command (m for help): 

That little star (*) was missing, which prevented VMware from booting this partition. I haven’t run into this before because on a physical machine, most BIOS and EFI firmware, if they cannot find an active boot partition, will just aggressively attempt to find a bootable partition before giving up. VMware does not.
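
Marking the partition active again is a one-letter fdisk command, run from the VM the disk was attached to (the device name below depends on how the volume attached):

fdisk /dev/sdb
Command (m for help): a
Partition number (1-4): 1
Command (m for help): w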

The behavior of VMware is more predictable from both a security standpoint and just a general correctness standpoint. Still, surprises like this aren’t welcome during late night scheduled outages, so always remember to make snapshots before modifying disks on any critical production systems.


Rear View Mirror v1.0 Released


Rear View Mirror v1.0 is now out. It has an improved option dialog, cleaner update system and MSI installers for both the 32-bit and 64-bit versions. It also has a new website: RearViewMirror.cc.

Big Sense: REST Web Services in Scala for Sensor Networks, Wellington Java User Group

dyject – Python Dependency Injection


The first release of dyject is now out. Dyject is a simple dependency injection module for both Python 2 and Python 3. It has no dependencies outside of the standard Python library and uses a configuration parser to construct and wire objects. You can download packages from PyPI, get the source code from GitHub or view the full instructions and documentation on dyject.com.

LtSense – Using Embedded Python for Sensor Networks, Wellington Python User Group

Removing the Tracking Image from Alfresco


Alfresco is an enterprise document management system. There is a free Community Edition that is open source, but its web interface pulls in an image from their official website that can be used to remotely track usage. This tracking image is added via Javascript and cannot be removed by simply changing a template. It is hard coded into a core class. This tutorial goes through the steps needed to patch the Alfresco WAR file in order to remove the tracking image. It has been tested for Alfresco 4.2c and may need to be adjusted for other versions.

Patching

A simple patch is available on github with both source and compiled class files. Simply clone the git repository, copy the share.war into the repository directory and run the patch. Then redeploy the share.war to your application server.

git clone https://github.com/sumdog/alfresco-tracking-removal
cd alfresco-tracking-removal
cp /path/to/share.war .
./patch.sh

Testing

After logging into Alfresco Share, viewing the source code of the main dashboard will show several Javascript files in the header. One is called messages_XXXXXX.js, where XXXXXX is a generated unique ID.

source code showing Alfresco Tracking Image

Viewing this specific Javascript file shows us the tracking image that’s injected onto the page in the fourth line of code.

Alfresco Tracking Image jQuery Injection

After our patch is applied, viewing this same Javascript file will show that the line of code that injects the tracking image is now gone.

Alfresco Tracking Image Removed

How it Works

To remove the tracking image, the class we need to modify is org.springframework.extensions.webscripts.MessagesWebScript. The following is a straight copy of the default version found in the Alfresco SDK with the tracking image lines commented out:

/*
 * Copyright (C) 2005-2010 Alfresco Software Limited.
 * This file is part of Alfresco
 *
 * Modified to get rid of the tracking PNG - Sumit <sumit@penguindreams.org>
 */
package org.penguindreams.alfresco;

//Imports added for completeness; adjust the package paths if they
//  differ in your Alfresco/Spring Surf version.
import java.io.IOException;
import java.io.Writer;
import java.util.Map;

import org.springframework.extensions.surf.util.I18NUtil;
import org.springframework.extensions.surf.util.StringBuilderWriter;
import org.springframework.extensions.webscripts.WebScriptException;
import org.springframework.extensions.webscripts.WebScriptRequest;
import org.springframework.extensions.webscripts.WebScriptResponse;
import org.springframework.extensions.webscripts.json.JSONWriter;

/**
 * WebScript responsible for returning a JavaScript response containing a JavaScript
 * associative array of all I18N messages name/key pairs installed on the web-tier.
 * <p>
 * The JavaScript object is created as 'Alfresco.messages' - example usage:
 * 
 * var msg = Alfresco.messages["messageid"];
 * 
 *
 * @author Kevin Roast
 */
public class TrackingImageRemovalMessagesWebScript extends org.springframework.extensions.webscripts.MessagesWebScript
{
    /**
     * Generate the message for a given locale.
     *
     * @param locale    Java locale format
     *
     * @return messages as JSON string
     *
     * @throws IOException
     */
    @Override
    protected String generateMessages(WebScriptRequest req, WebScriptResponse res, String locale) throws IOException
    {
        Writer writer = new StringBuilderWriter(8192);
        writer.write("if (typeof Alfresco == \"undefined\" || !Alfresco) {var Alfresco = {};}\r\n");
        writer.write("Alfresco.messages = Alfresco.messages || {global: null, scope: {}}\r\n");
        writer.write("Alfresco.messages.global = ");
        JSONWriter out = new JSONWriter(writer);

        try
        {
            out.startObject();
            Map<String, String> messages = I18NUtil.getAllMessages(I18NUtil.parseLocale(locale));
            for (Map.Entry<String, String> entry : messages.entrySet())
            {
                out.writeValue(entry.getKey(), entry.getValue());
            }
            out.endObject();
        }
        catch (IOException jsonErr)
        {
            throw new WebScriptException("Error building messages response.", jsonErr);
        }
        writer.write(";\r\n");

        // start logo 
        // community logo

        //Sumit - PenguinDreams - Edited to remove tracking image.
        //  It's in two places; removed below as well

        //final String serverPath = req.getServerPath();
        //final int schemaIndex = serverPath.indexOf(':');
        //writer.write("window.setTimeout(function(){(document.getElementById('alfresco-yuiloader')||document.createElement('div')).innerHTML = '<img src=\"");
        //writer.write(serverPath.substring(0, schemaIndex));
        //writer.write("://www.alfresco.com/assets/images/logos/community-4.0-share.png\" alt=\"*\" style=\"display:none\"/>\'}, 100);\r\n");
        // end logo

        return writer.toString();
    }

    @Override
    protected String getMessagesPrefix(WebScriptRequest req, WebScriptResponse res, String locale) throws IOException
    {
        return "if (typeof Alfresco == \"undefined\" || !Alfresco) {var Alfresco = {};}\r\nAlfresco.messages = Alfresco.messages || {global: null, scope: {}}\r\nAlfresco.messages.global = ";
    }

    @Override
    protected String getMessagesSuffix(WebScriptRequest req, WebScriptResponse res, String locale) throws IOException
    {
        StringBuilder sb = new StringBuilder();
        sb.append(";\r\n");

        //Sumit - PenguinDreams - removed; see above

        // start logo 
        // community logo
        //final String serverPath = req.getServerPath();
        //final int schemaIndex = serverPath.indexOf(':');
        //sb.append("window.setTimeout(function(){(document.getElementById('alfresco-yuiloader')||document.createElement('div')).innerHTML = '<img src=\"");
        //sb.append(serverPath.substring(0, schemaIndex));
        //sb.append("://www.alfresco.com/assets/images/logos/community-4.0-share.png\" alt=\"*\" style=\"display:none\"/>\'}, 100);\r\n");
        // end logo
        return sb.toString();
    }
}

You’ll notice in the above Java file that the tracking image appears in two places and is commented out in both. In order to replace the default version of this WebScript class with our custom version, we’ll have to modify the custom application context file custom-slingshot-application-context.xml. This file can be found in the standard Alfresco stand-alone package.

<?xml version='1.0' encoding='UTF-8'?>

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:hz="http://www.hazelcast.com/schema/config"
       xsi:schemaLocation="http://www.springframework.org/schema/beans

http://www.springframework.org/schema/beans/spring-beans-2.5.xsd


http://www.hazelcast.com/schema/config


http://www.hazelcast.com/schema/config/hazelcast-spring.xsd">

    
    <bean id="webscript.org.springframework.extensions.messages.get" parent="webscript" class="org.penguindreams.alfresco.TrackingImageRemovalMessagesWebScript">
        <property name="webFrameworkConfigElement" ref="webframework.config.element"/>
        <property name="dependencyHandler"         ref="dependency.handler"/>
    </bean>

    <bean id="webscript.org.springframework.extensions.messages.post" parent="webscript" class="org.penguindreams.alfresco.TrackingImageRemovalMessagesWebScript" />

</beans>

The original class is used as two different dependencies in the standard slingshot-application-context.xml, so adding the two bean definitions above will cause the Alfresco Spring Context loader to pull in our new beans from the custom context and override those that are in the default context.

Since this custom application context resides outside the WAR file in the shared/classes/alfresco/web-extension directory within Tomcat, one would think you could just add both this XML file and the compiled class into the shared/classes folder and Tomcat would apply those classes in the class loading process. Unfortunately, our custom MessagesWebScript is dependent on many jars located within the Alfresco web application. Those jars are not available outside the WAR file and would have to be replicated on the Tomcat classpath for TrackingImageRemovalMessagesWebScript to work. Therefore, it’s easier to compile the class and add both it and the application context directly to the WAR file itself.

Building

You might want to build the patch file yourself. When I initially built the TrackingImageRemovalMessagesWebScript Java class, I had the Alfresco Maven development environment set up. Within that environment, I placed the two files mentioned like so:

share/src/main/java/org/penguindreams/alfresco/TrackingImageRemovalMessagesWebScript.java
share/src/main/resources/alfresco/web-extension/custom-slingshot-application-context.xml

After the files are placed within those locations in your Maven build, running mvn install will produce a WAR file with the appropriate compiled TrackingImageRemovalMessagesWebScript.class. It is possible to build the required class file without a Maven environment; after all, it’s just one Java file. However, I had trouble assembling all the dependencies required, such as the Spring Surf libraries, which are still in incubator and currently have no release version (Alfresco uses the snapshots as dependencies for its production release).

Final Notes

Alfresco is an example of an open source project which doesn’t really encompass any of the philosophies behind the free software movement. It’s a commercial product which releases a slightly crippled open source version in order to gain free improvements from a wider community. Its documentation isn’t always clear or up to date, and developing within it can prove quite challenging. The tracking image should be an optional, opt-in flag, rather than a hard-coded element that is intentionally difficult to remove. These instructions may need to be updated for future versions of Alfresco. Comments and pull requests to the git repository are welcome.




Discovering Friend List Changes on Facebook with Python


Unfriendfinder was a Firefox plugin which allowed Facebook users to detect when people left their friend list or deactivated their accounts. After three years of development, Facebook requested the removal of the extension due to violation of their terms of service. The author chose not to fight the request. In response, I’ve created sumfriender, a Python script that can detect friend list changes as well as import previous friend lists from the Unfriendfinder Firefox plugin and grease monkey scripts.

Facebook claims that Unfriendfinder violated their terms of service because it altered the interface between the user and Facebook to add in new unauthorized features. But Unfriendfinder isn’t a piece of malware. It started off as a Greasemonkey script, for which there are thousands out there designed to change the way people interact with websites that they use. It’s something users choose to install to alter the way the information they receive is presented to them or interacted with.

Takedown notice on unfriendfinder.com

The following script accomplishes the same ends. In order to use it, you will need a web server to place the token capture file, Python 3 and a basic understanding of how Python scripts and Facebook authorization work. First, place the fb_token.html file on a web server. When you authorize this script, Facebook will redirect you to this file and its simple Javascript will display the token you’ll need for authorization.

<!DOCTYPE html>
<html lang="en">
<body>
<p>FB Response <span id="token"></span></p>
<script type="text/javascript">
  //Display the access token Facebook appends to the redirect URL's fragment
  document.getElementById('token').textContent = window.location.hash.substring(1);
</script>
</body>
</html>

Facebook recommends using https://www.facebook.com/connect/login_success.html as the redirect URL for authorization of desktop apps. Although the auth token is added to the URL, the login_success.html immediately redirects to block it out. It’s meant to be run within an embedded web browser in a desktop application where the token can be picked up programmatically before the redirect. It goes by too fast for a human to copy, so for a command line app, we need to use the fb_token.html to capture it.

Upload this file to a web server and then place the URL in the sumfriender.config file. Make sure this URL is set up in your Facebook app as a valid OAuth redirect. You will also need to add your Facebook API key and secret as APP_ID and APP_KEY respectively:

[FB_API]
APP_ID=1212121212121212
APP_KEY=abababababababababababababababab
OAUTH_TOKEN=
REDIRECT_URI=https://example.com/fb_token.html

Facebook App Information

In the advanced section of your Facebook application settings, you’ll need to add the address to the fb_token.html file on your web server.

Facebook OAuth URL

If you used the previous Unfriendfinder Greasemonkey extension or the Firefox plugin, you can import your old friend database. To do so, search for either a prefs.js file or a greasemonkey-prefs.uff.js in your Firefox application data directory. This is typically in ~/.mozilla/firefox/xxxxxxxx.default on Linux or C:\Users\[Username]\AppData\Roaming\Mozilla\Firefox\Profiles\xxxxxxxx.default on Windows. Run the preferences javascript through extract_uff.py to extract the Unfriendfinder data into text files.

Source code for extract_uff.py on GitHub

#!/usr/bin/env python3
#
#  extract_uff.py  -  Extracts friends/unfriends list from previous 
#                     versions of the UnfriendFinder GreaseMonkey script 
#                     and FireFox Plugin
#
#  Sumit Khanna <sumit@penguindreams.org> - http://penguindreams.org
#
#  License: Free for non-commercial use
#


import sys
import re
import json 
import time
import os

def format_json(json_obj):
  obj = json.loads(json_obj)
  ret = []
  for f in obj.items():
    ret.append( "{0:15}  {1}".format(f[0],  str(f[1]['name'].encode('utf-8'),'ascii','ignore')   ) )
  return ret

def save(name,lst):
  i = 1
  while os.path.exists(name):
    name = '{0}.{1}'.format(name,i)
    i += 1
    
  print("Writing {0}".format(name))
  fd = open(name,'w')
  for i in lst:
    fd.write("{0}\n".format(i))
  fd.close()

def user_pref(key,json_obj):
  section = key.split('_')[-1:][0]
  if section == 'unfriends':
    save('unfriends.txt',format_json(json_obj))  
  if section == 'deactivated':
    save('deactivated.txt',format_json(json_obj))
  if section == 'friends':
    save('friends.txt',format_json(json_obj))


if __name__ == '__main__':

  if len(sys.argv) < 2:
    print('Usage: extract_uff.py <prefs.js|greasemonkey-prefs.uff.js>')
    exit(2)


  fd = open(sys.argv[1], "r", encoding='utf-8')

  for line in fd:
    eline = line
    if re.search('extensions.greasemonkey.scriptvals.unfriend_finder',eline):
      eval(eline.strip().split(';')[0])
      

Running extract_uff.py

$ ./extract_uff.py ~/.mozilla/firefox/abcdefg.default/prefs.js 
Writing deactivated.txt
Writing friends.txt
Writing unfriends.txt
Writing deactivated.txt.1
Writing friends.txt.1
Writing unfriends.txt.1

If you had multiple people using the same web browser while logging into Facebook, you might get multiple files as shown above. The friends.txt is what will be used going forward, so select the correct one for your account and move it to friends.txt in the directory you will execute sumfriends.py from. The other files are there for your reference.

sumfriends.py is the script that will grab your friends list. Upon the first run, it will open a web browser for you to authorize the application. If you’ve set up your application, redirect URL and fb_token.html correctly, you will get a token you can place in the configuration file. This token will expire very quickly, so every subsequent time the script is run, it will exchange the token for one with a longer expiration time and save it in the configuration file. It will then show you any friends that are no longer in your list, output their details to the screen, save those details to the status.txt file and update the friends.txt file with your current friends list.

Source code for sumfriends.py on GitHub

#!/usr/bin/env python3

"""
   sumfriender.py  -  A script for detecting changes in your Facebook friend list

     Copyright 2013 - Sumit Khanna - PenguinDreams.org

     Free for non-commercial use

"""

import time

import configparser
import urllib.request
import urllib.parse
import webbrowser
import json
import argparse
import os
import sys
import time

class Facebook(object):

  def __init__(self,config_file):
    config = configparser.ConfigParser()
    config.read(config_file)
    self.fb_app = config.get('FB_API','APP_ID')
    self.fb_key = config.get('FB_API','APP_KEY')
    self.redirect_uri = config.get('FB_API','REDIRECT_URI')
    self.oauth_token = config.get('FB_API','OAUTH_TOKEN')

    self.config_file = config_file
    self.config_parser = config

  def requires_auth(self):
    return self.oauth_token.strip() == '' 

  
  def __fb_url(self,path,vars):
    return 'https://graph.facebook.com/{0}?{1}'.format(
      path,
      urllib.parse.urlencode(vars))

  def __make_request(self,url):
    #print(url)
    with urllib.request.urlopen(url) as html:
      return str( html.read() , 'ascii' , 'ignore' )

  def login(self):
    webbrowser.open(self.__fb_url('oauth/authorize',
      { 'type' : 'user_agent' , 
        'client_id' : self.fb_app , 
        'redirect_uri' : self.redirect_uri,
        'response_type' : 'token' ,
        'scope' : 'user_friends'
      }
    ))

  def __friends_as_dict(self,obj):
    ret = {}
    for f in obj:
      ret[f['id']] = f['name']
    return ret

  def friend_list(self):
    obj = json.loads((self.__make_request(self.__fb_url('me/friends',
      { 'access_token' : self.oauth_token }
    ))))
    friends = self.__friends_as_dict(obj['data'])
    while 'paging' in obj and 'next' in obj['paging']:
      obj = json.loads(self.__make_request(obj['paging']['next']))
      if len(obj['data']) > 0:
        friends.update(self.__friends_as_dict(obj['data']))
    return friends

  def user_active(self,uid):
    try:
      obj = json.loads(self.__make_request(self.__fb_url(uid,
        { 'access_token' : self.oauth_token }
      )))
      return 'id' in obj
    except urllib.error.HTTPError:
      return False

  def extend_token(self):
    "Requests a new OAUTH token with extended expiration time and saves it to the config file"
    token = urllib.parse.parse_qs(self.__make_request(self.__fb_url('oauth/access_token',{
      'client_id' : self.fb_app ,
      'client_secret' : self.fb_key ,
      'grant_type' : 'fb_exchange_token' ,
      'fb_exchange_token' : self.oauth_token
    })))['access_token'][0]

    self.config_parser.set('FB_API','OAUTH_TOKEN',token)
    fd = open(self.config_file,'w+')
    self.config_parser.write(fd)
    fd.close()


class StatusWriter(object):

  def __init__(self,status_file,stdout=False):
    self.__fd = open(status_file,'a')
    self.__screen = stdout

  def write(self,line):
    self.__fd.write('{0}\n'.format(line))
    if self.__screen:
      print(line)

  def __del__(self):
    self.__fd.close()


def load_old_friends(data_file):
  oldfriend = open(data_file,'r')
  data = {}
  for of in oldfriend:
    parts = of.split(" ")
    data[parts[0]] = " ".join(parts[1:]).strip()
  return data
  
def save_friends(data_file,list):
  fd = open(data_file,'w')
  for f in list:
    fd.write( "{0:15}  {1}\n".format(f,  str(list[f].encode('utf-8'),'ascii','ignore')   ) )
  fd.close()



if __name__ == '__main__':


  parser = argparse.ArgumentParser(
    description='A script to scan for changes in Facebook friend statuses',
    epilog='Copyright 2013 Sumit Khanna. Free for non-commercial use. PenguinDreams.org')
  #usage='%prog [-c config file] [-f friends file] [-s status file]'

  parser.add_argument('-v','--version', action='version', version='%(prog)s 0.1')
  parser.add_argument('-c',help='configuration file with API/AUTH keys [default: %(default)s]',
    default='sumfriender.config',metavar='config')
  parser.add_argument('-f',help='friend list db file [default: %(default)s]',
    default='friends.txt',metavar='friend_db')
  parser.add_argument('-l',help='status log file [default: %(default)s]',
    default='status.txt',metavar='log')
  parser.add_argument('-s',help='suppress writing status to standard out', action='store_true')

  args = parser.parse_args()

  if not os.path.exists(args.c):
    print('Configuration file {0} does not exist'.format(args.c), file=sys.stderr)
    sys.exit(2)

  fb = Facebook(args.c)
  if fb.requires_auth():
    print("You need a login key. Copy your access token to the OAUTH_TOKEN field in the configuration file.",file=sys.stderr)
    fb.login()
    sys.exit(3)
  else:

    #Let's renew our token so it doesn't expire
    fb.extend_token()

    cur_friends = fb.friend_list()  

    if not os.path.exists(args.f):
      print("{0} not found. Creating initial friends list".format(args.f))
      save_friends(args.f,cur_friends)
    else:

      old_friends = load_old_friends(args.f)
      out = StatusWriter(args.l, not args.s)
      heading = False

      for uid in old_friends:
        if uid not in cur_friends:

          if not heading:
            date = time.strftime("%Y-%m-%d %H:%M:%S")
            out.write(date)
            out.write('----------------------')
            heading = True

          status = 'is no longer in your friends list' if fb.user_active(uid) else 'has been deactivated or has disabled application access'
          output = "Friend {0} ({1}) {2}".format(old_friends[uid],uid,status)

          out.write(output)

      if heading:
        out.write('')    

      save_friends(args.f,cur_friends)

I do not distribute any application keys or secrets. You’ll have to set those up yourself. Be aware you might be violating Facebook’s terms of service by using this script. You’ll want to run this script at regular intervals using a scheduler such as cron. Since it only displays output when there are changes to your friends, you can combine it with my Ruby E-mail Script to send you e-mail notifications on changes.
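
For example, a crontab entry like this (the path is hypothetical) runs the check every morning; the -s flag keeps cron’s mail quiet while changes are still appended to status.txt:

0 7 * * * cd /home/user/sumfriender && ./sumfriends.py -s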

So you might be asking, why do I care if someone removes me as a friend on Facebook? Am I putting too much importance on my on-line life? Well, honestly, I rarely use Facebook for anything other than instant messaging and promoting my own websites and projects. I just find it interesting what lengths Facebook would go to in order to ensure users only experience their service in the way Facebook intended.

Facebook’s primary source of income is their ad revenue. Everything they do is carefully engineered to increase your interaction time. Don’t like a new interface change? It was probably put before a focus group to make sure you dislike it enough that you spend time trying to learn how to use it, but not so much you stop interacting with it entirely. Even your feed is filtered to avoid content from friends with opinions contrary to your own, in what Eli Pariser calls The Filter Bubble.

What bothers me is that Facebook would even request a take-down of Unfriendfinder. Is writing software that changes the way we interact with a website a violation of terms of service? How can it possibly infringe on intellectual property? If I built a custom web browser that added lots of additional context to the content of web pages, would the owners of those web pages have any right to demand I stop distributing my custom web browser? What does this imply about ad blocking software? What about augmented reality displays? Could a building owner sue over an advertisement a pair of glasses overlays on the building? If data is sent to your computer, whether it be audio, video or web pages, what software you choose to view and interpret that data should be up to you.

With the way technology is moving, I think that in the future, we’ll see more people move off closed private networks like Facebook, Google and Twitter and to more public and open platforms. We’ll see more open source solutions, with standard publishing interfaces, that will allow more control of what people post and share and more portability for moving that data between services.

Review: Cloud at Cost


Cloud at Cost Main Page Screenshot

Cloud at Cost is a new company offering very low cost virtual servers. For their first 10,000 nodes, they’re offering a $35 one time setup fee to get a virtual server for life (of the company). There are additional $70 and $140 one time plans for slightly larger virtual servers, or the option for $1, $2 or $4 per month. I decided to give one of these virtual server plans a try. There were quite a few hiccups. I wouldn’t say they’re production ready quite yet, but if you want a server to play around with, the price is right, and their services are certainly worth that price.

Their public-facing website looks well designed, and it’s pretty obvious the cheap servers are a means for them to quickly raise a lot of initial capital. Once you get an actual account and log in, the user interface is pretty bare-bones. Your root password is e-mailed to you in plain text. This wouldn’t be so bad if you could simply change it, but it’s also the password for the admin console for your VM and cannot be changed there. It’s also in a disabled text input field, so you can’t even select it to copy and paste without modifying the website with Firebug.

I also noticed that if I rebooted my VM, it completely reset everything. At first I thought it was actually erasing the VM, but it turns out only the configuration was being reset, due to a provisioning script, /etc/rc3.d/S97-setup-run.sh, not being removed after installation. There was a notice on their support board indicating the problem. It had the wrong path listed (/etc/init.d/S97-setup-run.sh), but the support page did allow comments, so I posted a response correcting it.

Cloud at Cost Support Ticket Screenshot

I assumed the company was Canadian, as their rates were in Canadian Dollars, but a GeoIP lookup of my VM showed it to be in Utah, disturbingly close to the infamous massive NSA data centre. I realize power for large endeavours in this region is much cheaper, which is most likely the reason for the location, but it’s still a little creepy. (Update: Apparently I wasn’t using a good GeoIP service. According to the comments, a traceroute places the data centre in Canada.) Other things I noticed were that there is no DNS interface, no IPv6 support, and you don’t get a list of operating systems until you actually start to make your purchase. They have a very limited selection too: CentOS, Ubuntu LTS and Debian.

Their support also leaves a lot to be desired. I noticed the configuration issue and submitted a ticket. Although I discovered the solution on their support page and updated the ticket before they could respond, it’s been several days and the ticket has still not been closed. Also, their support ticket system asks for your VM’s SSH user and password. Why would they even need this information? Other services like Linode simply offer support for making sure your VM environment is sound, but the operating system itself is always your responsibility. They shouldn’t be asking for people’s passwords at all.

Using tespeed, I saw download speeds peaking around 30 Mbit/s, with a few results in the 25 to 50 Mbit/s range, and upload speeds averaging around 10 to 15 Mbit/s. The speed test also showed latency averaging around 70~90ms.

IP: 162.xx.xx.xx; Lat: 40.296800; Lon: -111.676100; ISP: Neighborhood ISP
Loading server list...
Looking for closest and best server...
Testing latency...
74 ms latency for http://speedtest.fiber.net/speedtest/ (Fibernet Corp, Orem, UT, United States) [1.98 km]
192 ms latency for http://vision.neighborhoodisp.com/speedtest/ (Neighborhood ISP, Orem, UT, United States) [1.98 km]
76 ms latency for http://speedtest1.veracitynetworks.com/speedtest/ (Veracity Networks, Provo, UT, United States) [2.74 km]
70 ms latency for http://speed1.sumofiber.com/speedtest/ (Sumo Fiber, Salt Lake City, UT, United States) [29.65 km]
91 ms latency for http://sto-utah-01.sys.comcast.net/speedtest/ (Comcast, Salt Lake City, UT, United States) [29.65 km]
Download size: 1.96 MiB; Downloaded in 0.27 s                                              
Download speed: 7.24 Mbit/s
Download size: 1.96 MiB; Downloaded in 0.32 s                                              
Download speed: 6.10 Mbit/s
Download size: 8.09 MiB; Downloaded in 0.48 s                                              
Download speed: 16.97 Mbit/s
Download size: 8.09 MiB; Downloaded in 0.48 s                                              
Download speed: 16.90 Mbit/s
Download size: 17.89 MiB; Downloaded in 0.61 s                                             
Download speed: 29.22 Mbit/s
Download size: 17.89 MiB; Downloaded in 0.71 s                                             
Download speed: 25.18 Mbit/s
Download size: 31.78 MiB; Downloaded in 0.87 s                                             
Download speed: 36.48 Mbit/s
Download size: 71.49 MiB; Downloaded in 2.64 s                                             
Download speed: 27.12 Mbit/s
Download size: 126.52 MiB; Downloaded in 2.60 s                                            
Download speed: 48.68 Mbit/s
Download size: 198.53 MiB; Downloaded in 4.34 s                                            
Download speed: 45.75 Mbit/s
Download size: 285.07 MiB; Downloaded in 11.95 s                                           
Download speed: 23.86 Mbit/s
Upload size: 2.10 MiB; Uploaded in 0.40 s                                                  
Upload speed: 5.23 Mbit/s
Upload size: 2.10 MiB; Uploaded in 0.41 s                                                  
Upload speed: 5.08 Mbit/s
Upload size: 8.39 MiB; Uploaded in 0.85 s                                                  
Upload speed: 9.93 Mbit/s
Upload size: 8.39 MiB; Uploaded in 0.85 s                                                  
Upload speed: 9.82 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.33 s                                                 
Upload speed: 12.60 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.31 s                                                 
Upload speed: 12.81 Mbit/s
Upload size: 33.55 MiB; Uploaded in 3.06 s                                                 
Upload speed: 10.97 Mbit/s
Upload size: 50.33 MiB; Uploaded in 4.70 s                                                 
Upload speed: 10.70 Mbit/s
Upload size: 50.33 MiB; Uploaded in 4.69 s                                                 
Upload speed: 10.73 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.16 s                                                 
Upload speed: 10.89 Mbit/s
Upload size: 12.58 MiB; Uploaded in 0.70 s                                                 
Upload speed: 17.99 Mbit/s
Upload size: 12.58 MiB; Uploaded in 0.91 s                                                 
Upload speed: 13.81 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.33 s                                                 
Upload speed: 9.44 Mbit/s
Upload size: 12.58 MiB; Uploaded in 0.93 s                                                 
Upload speed: 13.49 Mbit/s
Upload size: 12.58 MiB; Uploaded in 0.89 s                                                 
Upload speed: 14.21 Mbit/s
Upload size: 12.58 MiB; Uploaded in 0.84 s                                                 
Upload speed: 14.98 Mbit/s
Upload size: 12.58 MiB; Uploaded in 0.84 s                                                 
Upload speed: 15.04 Mbit/s
Upload size: 12.58 MiB; Uploaded in 0.85 s                                                 
Upload speed: 14.86 Mbit/s
Upload size: 12.58 MiB; Uploaded in 0.91 s                                                 
Upload speed: 13.75 Mbit/s
Upload size: 12.58 MiB; Uploaded in 0.86 s                                                 
Upload speed: 14.63 Mbit/s
Upload size: 25.17 MiB; Uploaded in 1.94 s                                                 
Upload speed: 12.98 Mbit/s
Upload size: 25.17 MiB; Uploaded in 1.32 s                                                 
Upload speed: 19.02 Mbit/s
Upload size: 25.17 MiB; Uploaded in 1.12 s                                                 
Upload speed: 22.45 Mbit/s
Upload size: 25.17 MiB; Uploaded in 2.05 s                                                 
Upload speed: 12.26 Mbit/s
Upload size: 25.17 MiB; Uploaded in 1.45 s                                                 
Upload speed: 17.41 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.18 s                                                 
Upload speed: 14.23 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.83 s                                                 
Upload speed: 9.18 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.77 s                                                 
Upload speed: 9.48 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.21 s                                                 
Upload speed: 13.87 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.46 s                                                 
Upload speed: 11.46 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.33 s                                                 
Upload speed: 12.63 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.11 s                                                 
Upload speed: 15.15 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.07 s                                                 
Upload speed: 15.63 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.00 s                                                 
Upload speed: 16.81 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.15 s                                                 
Upload speed: 14.59 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.72 s                                                 
Upload speed: 9.75 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.35 s                                                 
Upload speed: 12.40 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.64 s                                                 
Upload speed: 10.26 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.23 s                                                 
Upload speed: 13.61 Mbit/s
Upload size: 16.78 MiB; Uploaded in 1.07 s                                                 
Upload speed: 15.64 Mbit/s
Upload size: 8.39 MiB; Uploaded in 0.64 s                                                  
Upload speed: 13.02 Mbit/s
Upload size: 8.39 MiB; Uploaded in 0.65 s                                                  
Upload speed: 12.92 Mbit/s
Upload size: 8.39 MiB; Uploaded in 0.75 s                                                  
Upload speed: 11.12 Mbit/s
Upload size: 8.39 MiB; Uploaded in 0.80 s                                                  
Upload speed: 10.48 Mbit/s
Upload size: 8.39 MiB; Uploaded in 0.79 s                                                  
Upload speed: 10.59 Mbit/s
Upload size: 6.29 MiB; Uploaded in 0.75 s                                                  
Upload speed: 8.43 Mbit/s
Upload size: 6.29 MiB; Uploaded in 0.76 s                                                  
Upload speed: 8.28 Mbit/s
Upload size: 6.29 MiB; Uploaded in 0.75 s                                                  
Upload speed: 8.38 Mbit/s
Upload size: 6.29 MiB; Uploaded in 0.75 s                                                  
Upload speed: 8.39 Mbit/s
Upload size: 6.29 MiB; Uploaded in 0.79 s                                                  
Upload speed: 8.01 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.09 s                                                 
Upload speed: 11.51 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.22 s                                                 
Upload speed: 10.35 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.07 s                                                 
Upload speed: 11.74 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.01 s                                                 
Upload speed: 12.42 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.02 s                                                 
Upload speed: 12.40 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.17 s                                                 
Upload speed: 10.76 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.02 s                                                 
Upload speed: 12.33 Mbit/s
Upload size: 12.58 MiB; Uploaded in 0.96 s                                                 
Upload speed: 13.07 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.37 s                                                 
Upload speed: 9.17 Mbit/s
Upload size: 12.58 MiB; Uploaded in 1.11 s                                                 
Upload speed: 11.33 Mbit/s
Upload size: 50.33 MiB; Uploaded in 3.74 s                                                 
Upload speed: 13.46 Mbit/s
Upload size: 50.33 MiB; Uploaded in 4.35 s                                                 
Upload speed: 11.58 Mbit/s
Upload size: 50.33 MiB; Uploaded in 4.41 s                                                 
Upload speed: 11.40 Mbit/s
Upload size: 50.33 MiB; Uploaded in 3.99 s                                                 
Upload speed: 12.62 Mbit/s
Upload size: 50.33 MiB; Uploaded in 3.74 s                                                 
Upload speed: 13.46 Mbit/s
Upload size: 33.55 MiB; Uploaded in 4.19 s                                                 
Upload speed: 8.00 Mbit/s
Upload size: 33.55 MiB; Uploaded in 3.49 s                                                 
Upload speed: 9.62 Mbit/s
Upload size: 33.55 MiB; Uploaded in 4.30 s                                                 
Upload speed: 7.81 Mbit/s
Upload size: 33.55 MiB; Uploaded in 3.34 s                                                 
Upload speed: 10.06 Mbit/s
Upload size: 33.55 MiB; Uploaded in 2.97 s                                                 
Upload speed: 11.28 Mbit/s
Upload size: 33.55 MiB; Uploaded in 4.73 s                                                 
Upload speed: 7.09 Mbit/s
Upload size: 33.55 MiB; Uploaded in 3.43 s                                                 
Upload speed: 9.79 Mbit/s
Upload size: 33.55 MiB; Uploaded in 4.19 s                                                 
Upload speed: 8.00 Mbit/s
Upload size: 33.55 MiB; Uploaded in 3.64 s                                                 
Upload speed: 9.23 Mbit/s
Upload size: 33.55 MiB; Uploaded in 5.93 s                                                 
Upload speed: 5.66 Mbit/s

As far as the one-time prices go, $35 for life seems like a good deal until you realize that at the monthly rate of $1, it would take 35 months, about 2.9 years, before you actually save any money. The ratio is exactly the same with the $70 one-time vs $2 monthly for the second tier and the $140 vs $4 for the third tier. If you select the one-time price, you’re basically betting that the company will survive over three years and that you’d still be using them after that time.

So, just to recap, here is a list of issues I have with CloudatCost:

  • No IPv6 Support
  • No DNS Services
  • The company charges you in Canadian Dollars, yet the GeoIP of the data centre showed Utah, USA (see comments below)
  • Limited Operating System Support (CentOS, Ubuntu LTS and Debian)
  • Supported distributions are listed nowhere on the site until you reach the purchase section
  • Configuration wiped on reboot (fixed)
  • Very slow support response times
  • Monthly plans are cheaper than one-time plans unless the company lasts longer than 2.9 years

The positive side:

  • Hosting plans are cheap
  • Download and upload rates are reasonable, with downloads topping out around 30~50 Mbit/s and uploads around 10~20 Mbit/s

Overall, it’s a service that seems right for the price. Most services bank on people utilizing nowhere near all of their allocated VM resources. Many will over-provision nodes, keep the low-usage/idle VMs packed onto one cluster of servers, and migrate more highly utilized VMs onto more powerful servers. I suspect Cloud at Cost is banking on this underutilization for their initial influx of capital.

Cloud at Cost is a bare-bones VM playground that is the right price for developers and those who like to tinker. It’d be a great landing zone for small scripts, scheduled tasks and development servers. However, for anything production grade other than basic monitoring services, I’d suggest using something a bit more reliable such as Linode.


Exporting Wordpress Comments to Jekyll

Do you have an old Wordpress blog? Are you migrating it to Jekyll? Tired of spam? Don’t want to moderate comments anymore, but you don’t want to lose all those comments from people who have contributed so far? The Jekyll-oldcomments gem is for you!

I recently migrated most of my websites off of Wordpress to Jekyll. There are ways to add commenting to Jekyll generated sites, such as running Isso, but I decided not to go that route. Don’t get me wrong, I like the contributions I’ve received; I do quite a bit of commenting and comment reading myself. I just no longer wanted to deal with the hassle that comes with comment moderation and spam removal.

I created a script to extract comments from my old Wordpress blogs, as well as a Jekyll plug-in to statically display those comments. The jekyll-oldcomments instructions and source code can be found on Github.

Removing Footnotes from Excerpts in Jekyll

I’ve been writing a lot of Jekyll related posts lately, because I’ve recently switched all my websites over to Jekyll from Wordpress. My latest challenge involves footnotes in excerpts. Kramdown, the default Markdown implementation in Jekyll, supports footnotes, but they unfortunately show up when using post.excerpt inside Liquid templates. The following is a plug-in I wrote to strip footnotes, as well as the superscript links leading to them, for use in templates with post previews, such as an index page or RSS feed.

When dealing with posts in Jekyll, post.excerpt can be used within post iterations to display a preview of the post. The excerpt_separator can be set in the _config.yml to indicate where the excerpt ends. The following example shows an index page where the paginator is used to list posts and their excerpts, as well as a “Read More” link to continue on to the rest of the article.

{% for post in paginator.posts %}
  <section class="blogroll">
    <header>
      <h2>
        <a class="post-link" href="{{ post.url | prepend: site.baseurl }}">{{ post.title }}</a>
      </h2>
      <span class="blog-post-meta">{{ post.date | date: "%b %-d, %Y" }}</span>
    </header>
    <article>
      {{ post.excerpt }}
      <a class="button small right round" href="{{ post.url | prepend: site.baseurl }}">
        Read More <span class="fa fa-chevron-right"></span>
      </a>
    </article>
  </section>
{% endfor %}
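
If you want the excerpt to end at an explicit marker rather than at the first paragraph break, the corresponding _config.yml line might look like the following (the marker string itself is your choice):

excerpt_separator: "<!--more-->"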

The trouble with this is that if you have footnotes in the opening excerpt of your post, Kramdown will add them to the preview itself. To deal with this, we’ll need to add a Jekyll plug-in for filtering out the added footnote div. This will require the nokogiri gem for HTML parsing.

In the following code block, nokogiri is used to read in an HTML fragment. Kramdown places all the footnotes in a div with its class set to footnotes, and it also gives all the links leading to the footnotes a class. The for loop iterates through these items and removes elements with the appropriate classes. Since I exported my old website from Wordpress, I’m also deleting sup blocks, which were generated by the Textile processor I was using in Wordpress.
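
A minimal version of the plug-in looks something like the following sketch; the strip_footnotes filter name and the _plugins/stripfootnotes.rb path match the usage below, while the exact selectors are assumptions about Kramdown’s output:

# _plugins/stripfootnotes.rb -- minimal sketch; selectors are assumptions
require 'nokogiri'

module Jekyll
  module StripFootnotes
    def strip_footnotes(raw)
      # Parse the rendered excerpt as an HTML fragment
      doc = Nokogiri::HTML.fragment(raw)

      # Kramdown wraps footnotes in <div class="footnotes"> and gives the
      # superscript links class="footnote"; old Textile output left <sup>s
      for block in doc.css('div.footnotes, a.footnote, sup')
        block.remove
      end

      doc.to_html
    end
  end
end

Liquid::Template.register_filter(Jekyll::StripFootnotes)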

After saving the plug-in as _plugins/stripfootnotes.rb, the excerpt line in the index template above can be modified to apply the filter like so:

...
  {{ post.excerpt | strip_footnotes }}
...

In the past I’ve turned little plug-ins like this one into their own gems, such as jekyll-oldcomments or jekyll-unsanitize. Since this particular plug-in is very specialized, I’ve simply placed the source code in a Github Gist, free of license, for anyone to use and modify.

How Google and Microsoft made E-mail Unreliable

E-Mail Icon

E-mail is completely broken and unreliable thanks to big players like Google, Microsoft and Facebook. Shortly after the NSA spying revelations, I decided to move off of Gmail and back onto my own e-mail server. It wasn’t for privacy, as e-mail is often transmitted in plain text and has no more security than a postcard, but out of a general desire to distance myself from Google services. I had run an e-mail server in the past using Postfix and Courier-IMAP back around 2005 (along with amavisd-new, SpamAssassin and ClamAV for spam and viruses). When I set up an e-mail server again in 2013, the stack was pretty much identical, except that Dovecot now replaces Courier and additional tools such as DKIM, DMARC and SPF are now necessary for outgoing e-mail validation. However, the largest challenge I faced wasn’t my own technology stack, but my e-mails becoming unreliable against both Google’s and Microsoft’s over-aggressive spam filters.

Google, Facebook and Closed Communication

Google has always been at odds with Facebook, as the two began to seriously compete with each other over communication services. In 2010, Google blocked Facebook from importing Gmail contacts to build an initial friend community in preparation for the launch of Google+ in 2011 [1]. Since that time the two networks have remained separate, with no ability to import contacts or friends from one to the other.

Google’s primary communication system was built on top of e-mail: an open, federated, standard communication system. E-mail allows anyone to set up a point of communication on their own domain. The word federated in this sense means that independent systems are allowed to communicate by means of standard addresses. Telephone systems are somewhat federated in the sense that many providers communicate with each other using a standard addressing system based on phone numbers and international calling codes. Postal mail is an analogue form of federation, as each country can establish a post office and send items to other countries using a standardized address. Although each system can have its own internal structure, implementation, sorting routines and technology, there is an agreed upon set of standards for communicating between domains.

When it comes to e-mail, messages from one provider to another are sent via the Simple Mail Transfer Protocol (SMTP). Facebook tried to integrate e-mail into its own messaging service, giving all users their own @facebook.com e-mail address based on their username. Facebook also silently replaced everyone’s public e-mail address on their profile with an @facebook.com address, forcing people further into their closed communication system. The service, which started in November of 2012, was plagued with problems and was rarely used. Eventually, Facebook shut down the service in early 2014 [2].

I really hated using Facebook’s messaging system. Facebook offered an XMPP interface, another open federated standard for sending instant messages, but the reliability of their implementation was atrocious. Many of my messages simply failed to send with no notification of failure. I’d often have to log in to Facebook to ensure my messages were actually getting through. Even reliability within the web interface was inferior to every other proprietary instant messenger at the time, including AOL Instant Messenger (AIM), Yahoo Messenger and Windows Live Messenger (MSN). Although it is considerably more reliable today, it took nearly a decade for it to catch up with its counterparts.

Eventually, both of these giants would abandon most standardized federated protocols. Google dropped support for federated XMPP in GTalk (now Google Hangouts) in 2013 [3]. Prior to this, people could communicate with contacts on Google’s GTalk service from their own servers. Google silently removed this feature before publicly announcing it. XMPP can still be used with Google Hangouts for person-to-person IMs within their service, but group chats and video chats are now only available using their proprietary Hangouts application.

Facebook never had federated XMPP support, but even their basic XMPP interface was shut down in early 2015 [4]. This forces users into only being able to use Facebook’s web interface or mobile app. Without a replacement API, developers who want to integrate 3rd party applications with Facebook’s messaging service must now reverse engineer its proprietary protocol.

Overaggressive Spam Filters

In 2007, Google purchased Postini, a company specializing in spam filtering software [5]. At the time, I worked for a company that used Postini internally, and it worked fairly well. In 2012 I was complaining to a friend about how I didn’t like Gmail’s user interface. Defending Gmail as a service, he made the point that Google’s spam filters were ahead of other services, preventing any spam from getting through. Later I would learn this really isn’t an advantage. Not only does Gmail’s spam filter prevent spam from reaching its users, it also blocks an incredible amount of non-spam e-mail.

“Earlier this year I moved my personal email from Google Apps to a self-hosted server, with hopes of launching a paid mail service à la Fastmail on the same infrastructure. I’ve done this before, and this server was configured perfectly: not on any blacklists, reverse DNS set up, SPF, DKIM and DMARC policies in place, etcetera… I had no issues sending to other servers running Postfix or Exim; SpamAssassin happily gave me a 0.0 score, but most big services and corporate mail servers were rejecting my mail, or flagging it as spam: Outlook.com accepted my email, but discarded it. GMail flagged me as spam…” -The Hostile Email Landscape, Jody Ribton [6]

I’ve often run into Ribton’s issues as well. Even prior to leaving Gmail, I had e-mail I’d send to friends that would end up in their spam folder. Even internally, their spam filter is horribly over-aggressive. I still use my university e-mail accounts, outsourced to Google and Microsoft, for sending e-mail to schools and professors. In my own testing, e-mail from those accounts tends to get flagged as spam, especially if I include PDF attachments.

Microsoft fares no better. A few months ago, I sent an e-mail to a friend whom I e-mail several times a year. Out of nowhere, I received the following response:

This is the mail system at host **removed**.

I'm sorry to have to inform you that your message could not
be delivered to one or more recipients. It's attached below.

For further assistance, please send mail to postmaster.

If you do so, please include this problem report. You can
delete your own text from the attached returned message.

The mail system

<**removed**@hotmail.com>: host mx3.hotmail.com[65.54.188.72] said: 550 SC-001
(BAY004-MC1F57) Unfortunately, messages from x.x.x.x weren't sent.
Please contact your Internet service provider since part of their network
is on our block list. You can also refer your provider to
http://mail.live.com/mail/troubleshooting.aspx#errors. (in reply to MAIL
FROM command)

I contacted my ISP to see if there were any issues with spammers on the subnet my server was hosted on, or if they had any network operations specialists with communication channels with Microsoft. They said they were unable to communicate with Microsoft about IP blacklists, and the only solution they had was to assign me a different IP address. I took it upon myself to file a problem with Microsoft, which resulted in the following response:

Conditionally mitigated
x.x.x.x/32
Our investigation has determined that the above IP(s) qualify for conditional mitigation. These IP(s) have been unblocked, but may be subject to low daily email limits until they have established a good reputation.

Please note that mitigating this issue does not guarantee that your email will be delivered to a user’s inbox.

Ongoing complaints from users will result in removal of the mitigation.

Mitigation may take 24 - 48 hours to replicate completely throughout our system.

If you feel your issue is not yet resolved, please reply to this email and one of our support team members will contact you for further investigation.

Following this, I attempted to resend the e-mail, which resulted in getting the same response again. I verified my servers were fully patched and checked my logs to ensure no one had found an exploit to use my server to send spam. I came up with nothing. Eventually I broke out of this loop and my e-mail was delivered. At least in this instance, I got a notice. Typically e-mail is dropped without any indication to the sender.

From my own e-mail server, even if I send an e-mail with no links, images or profanity, it will still end up in the receiver’s spam folder or get discarded silently. SPF, DKIM and DMARC are all domain verification systems for validating an e-mail’s origin to prevent spam. I have all three records set in DNS for all the domains I send e-mail from, verified they were correct using testing tools, and I still get flagged as spam.
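
For reference, all three mechanisms boil down to DNS TXT records along these lines; the domain, the DKIM selector and the truncated key are placeholders for this sketch, not my real records:

example.com.                   IN TXT "v=spf1 mx a -all"
mail._domainkey.example.com.   IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb..."
_dmarc.example.com.            IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"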

I almost always must use a second channel of communication such as Facebook, Google Hangouts, SMS/Text or even Reddit messages, telling the receiver to check his or her spam folder. Sure enough, once they do and mark that message as not being spam, subsequent messages get through fine.

The Connection Problem

Facebook has taken an entirely different approach to spam prevention and messaging. Looking back at the MySpace days, one of the features Facebook supported was a two-way confirmation process for friends. People have control over which individuals are able to add them, unlike on other services such as Twitter. This allows Facebook to build a network and use the links between individuals to determine the potential for a message to be spam.

Messages sent to an individual outside of close links (friends, friends of friends, and so forth) would often go to a folder marked other, more recently renamed to filtered messages. At one time, Facebook even attempted charging people a fee to bypass the filter to prevent spam. The fee varied and was most likely based on an internal/proprietary algorithm, with Facebook founder Mark Zuckerberg having a fee of $100 USD for sending a message to him [7].

Marking a message as not-spam is essentially making the same type of two-way approval for communication. The major problem is that people don’t often check their spam folders, which can be filled with thousands of messages at any one time.

Google attempts to build a similar hierarchy/friend network with their Google+/Hangouts services. Recently Google integrated Gmail into this system, allowing people to send e-mail to people they were connected to on Google+/Hangouts without knowing their e-mail address [8]. These messages aren’t really e-mail, but they appear alongside other messages in Gmail, further pushing communication into a closed system that only works through proprietary, non-federated, commercial channels.

Decline of E-mail

In 2009, a Nielsen survey found that people used social networking far more than they used e-mail [9]. Many people today only use e-mail to sign up for other services. It becomes a bucket of notifications that are never checked. The inbox has turned into the spam folder, and Google’s attempts at adding priority e-mail and automatic sorting seem to have come too little, too late.

Piled Higher and Deeper - by Jorge Cham www.phdcomics.com

The simple fact is that today, e-mail has become completely unreliable. A letter sent through the post office is more likely to reach the intended recipient than an e-mail sent to someone who doesn’t have you listed as a contact. Facebook’s and Google’s war over market share of the Internet has caused people to flock to their services as primary communication mechanisms.

In November 2015, Facebook began blocking all communication mentioning the new social networking service Tsu. One of Tsu’s selling points seems to be a means of sharing advertising revenue with the users of the service. Facebook removed all posts with links to the site, and even news posts commenting on the post removal [10]. It is possible that Tsu was spamming Facebook, or that massive interest from people triggered automated spam processes; however, it’s also likely the blocking was intentional. Just as in the Google and Facebook war, connection maps of individuals are an important asset. When one for-profit company controls the communication medium, they set the rules and can easily stamp out competitors to their monopoly, in the name of spam prevention.

E-mail was once the pillar of the Internet: a truly distributed, standards-based and non-centralized means of communicating with people across the planet. Today, an increasing number of services people rely on are losing federation and interoperability at the hands of companies that need to keep people engaged on their for-profit services. Much of the Internet’s communication is moving into these walled gardens, leaving those who want to run their own services in an increasingly hostile communication landscape.

  1. Google blocks Facebook from importing GMail contacts in preparation for Google Me launch. 8 November 2010. Brownlee. Geek.

  2. Facebook Retires its Email Service. 24 February 2014. Hamburger. The Verge.

  3. No, it’s not the end of XMPP for Google Talk. 2 March 2015. Fippo. XMPP Standards Foundation.

  4. Facebook Chat Will Stop Working in Ubuntu This Week. 20 April 2015. Sneddon. OMG Ubuntu.

  5. Google to Acquire Postini. 9 July 2007. Google Press Release.

  6. The Hostile Email Landscape. 17 October 2015. Ribton. Liminality.

  7. Wah? Facebook Wants You to Pay $100 to Message Zuckerberg. 11 January 2013. Thompson. CNBC.

  8. Any Google+ User Can Now Email You Without Your Address. 10 January 2014. Wagner. Mashable.

  9. Social networking and blogs now more popular than email, says Nielsen. 9 March 2009. Schofield. The Guardian.

  10. Facebook Is Blocking an Upstart Social Network. Should We Be Worried? 12 November 2015. Finley. Slate.

Jekyll 3 and Foundation 6

Jekyll + Foundation

Jekyll is a powerful static website generator geared towards programmers or those with technical backgrounds. It also has support for automatically building SCSS. I wanted to use Jekyll with the Foundation CSS framework, but I found that most of the tutorials available were out of date, used older versions of Jekyll or Foundation, or recommended forking an existing Github repository.

The following tutorial goes through the process of adding Foundation to a Jekyll website and having Jekyll automatically build all the necessary assets. It uses Jekyll 3.0.1 and Foundation 6, and may need to be adjusted for future versions. This tutorial assumes that Jekyll is already installed; if you haven’t already installed Jekyll, follow the instructions on Jekyll’s official website.

The first thing we need to do for this tutorial is create a basic Jekyll site. This is as simple as typing the following:

    jekyll new jekyll-foundation

This will create a new directory with a basic Jekyll website. The important directories we need to be concerned with are _sass and css. We will also need to make a js directory for Foundation JavaScript.

At the time of this writing, I used Foundation 6. You can get the source by running npm install foundation-sites. The directory structure looks like the following:

foundation-sites
├── dist
│   ├── foundation.css
│   ├── foundation.js
│   ├── foundation.min.css
│   └── foundation.min.js
├── foundation-sites.scss
├── js
│   ├── foundation.abide.js
│   ├── foundation.accordion.js
│   ├── foundation.accordionMenu.js
│   ├── foundation.core.js
│   └── ...
├── LICENSE
├── package.json
├── README.md
├── scss
│   ├── components
│   ├── forms
│   ├── foundation.scss
│   ├── _global.scss
│   ├── grid
│   ├── settings
│   ├── typography
│   ├── util
│   └── vendor
└── test
    ├── javascript
    └── sass

We’re going to use Foundation’s minified foundation.min.js; however, we won’t be using the minified CSS file. Since Jekyll has SCSS support, we can use the SCSS source files themselves. When using Foundation with SCSS, it’s also possible to assign HTML5 elements directly to the Foundation grid, avoiding all the messy large, medium, small, row and column classes.

By default Jekyll comes with some basic stylesheets and layouts. We’re going to start by removing all of them. Delete _base.scss and _layout.scss from the _sass directory in your Jekyll project. If you don’t intend to use Jekyll’s syntax highlighting, _syntax-highlighting.scss can be removed as well. Finally, in the css directory, remove main.scss.

Now we need to copy all our Foundation assets into the Jekyll project. Remember to create the js directory. Here’s a list of what goes where, with example copy commands after the mapping:

foundation-sites/dist/foundation.min.js  ->  jekyll-foundation/js/foundation.min.js
foundation-sites/scss/* -> jekyll-foundation/_sass/*
foundation-sites/foundation.scss -> jekyll-foundation/css/main.scss
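
On a Unix-like system, and assuming the two project directories sit side by side, that mapping translates to something like the following (depending on the Foundation version, the root SCSS file may be named foundation-sites.scss instead):

mkdir -p jekyll-foundation/js
cp foundation-sites/dist/foundation.min.js jekyll-foundation/js/
cp -r foundation-sites/scss/* jekyll-foundation/_sass/
cp foundation-sites/foundation.scss jekyll-foundation/css/main.scss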

We’ll need to modify css/main.scss so that it includes YAML front-matter. Without this, Jekyll will treat the file as static content (such as an image). The front-matter, even if empty, will make Jekyll run the file through its SCSS processor.

---
---
@import 'foundation';
@include foundation-everything;

We can also add the following to our _config.yml to explicitly tell Jekyll to compress/minify the generated CSS:

 ...
 sass:
   sass_dir: _sass
   style: compressed
 ...

Next we’re going to modify the base layouts a bit so we can use them with HTML5 and SCSS mixins later. We’ll start with the following rather simple change to _layouts/default.html

<!DOCTYPE html>
<html>
  {% include head.html %}
  <body>
    
    {% include header.html %}
    
    <main>
      {{ content }}
    </main>
    
    {% include footer.html %}
  
  </body>
</html>

We remove the page-content and wrapper div tags and replace them with a main tag. We’re going to leave the other layouts and includes alone for now, but of course you will most likely need to modify them according to your needs.

Next let’s modify the header.html to include a simple Foundation 6 navigation bar. The following is taken directly from the Foundation example documentation, except that we’re using an HTML5 nav element instead of a div.

<header>
  <nav class="top-bar">
    <div class="top-bar-left">
      <ul class="dropdown menu" data-dropdown-menu>
        <li class="menu-text">Site Title</li>
        <li class="has-submenu">
          <a href="#">One</a>
          <ul class="submenu menu vertical" data-submenu>
            <li><a href="#">One</a></li>
            <li><a href="#">Two</a></li>
            <li><a href="#">Three</a></li>
          </ul>
        </li>
        <li><a href="#">Two</a></li>
        <li><a href="#">Three</a></li>
      </ul>
    </div>
    <div class="top-bar-right">
      <ul class="menu">
        <li><input type="search" placeholder="Search"></li>
        <li><button type="button" class="button">Search</button></li>
      </ul>
    </div>
  </nav>
</header>

This particular navigation menu has a submenu, which requires the Foundation JavaScript to be loaded. Foundation’s JavaScript depends on jQuery, so we must first download jQuery and add the appropriate reference in the head.html file. In the following example, we use a local minified jQuery 2.1.4, but you may prefer to pull it from a CDN.

...
<script type="text/javascript" src="/js/jquery-2.1.4.min.js"></script>
... 

Finally, we modify (and simplify) the footer.html. The Foundation documentation recommends placing this code at the bottom of the page near the closing body tag, so that is what I’ve done in the following example. It may be cleaner to place the call in its own file and wrap it in a jQuery $(document).ready() callback (sketched after the example below), however I haven’t tested whether that works correctly. The choice is up to you.

<footer class="site-footer">

  <h2 class="footer-heading">{{ site.title }}</h2>
  <p>{{ site.title }}</p>
  <p>{{ site.description }}</p>

  <script src="/js/foundation.min.js"></script>
  <script>
    $(document).foundation();
  </script>

</footer>
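
For completeness, the untested alternative mentioned above might look something like this, with the call moved into its own file (the js/init.js name is just an assumption):

<script src="/js/foundation.min.js"></script>
<script src="/js/init.js"></script>

// js/init.js
$(document).ready(function() {
  // Initialize all Foundation plugins once the DOM is ready
  $(document).foundation();
});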

Now let’s take a look at that main.scss file again. We’re going to use mixins to assign our elements to parts of the Foundation grid.

---
---

$primary-color: green;
$secondary-color: blue;
$dropdownmenu-background: orange;

@import 'foundation';
@include foundation-everything;

main {
  @include grid-row;
  article {
    header, div:first-of-type {
      @include grid-column(12);
    }
  }
  padding : {
    left: 20px;
    right: 20px;
  }
}

footer {
  padding: 20px;
  background-color: black;
  color: white;
}

Here we define our main tag to be a Foundation grid row. We also set the header and the first div under article to be a size 12 grid column (note the :first-of-type selector; a bare :first is not a valid CSS selector). There is some other CSS for the footer, and at the top we’ve defined some internal Foundation variables to change the defaults for our website. The colours are pretty hideous, but they’re just provided as an example. The padding is needed so the text doesn’t hit the edge of the layout in mobile (small/medium) mode.

I’ve created a new post with some lorem ipsum text. Using the changes made so far, the result should look like the following:

Lorem Ipsum Post in Jekyll + Foundation Environment

So that concludes our basic tutorial on using Foundation 6 with Jekyll 3. Obviously there is a lot more customization that can be done than what’s shown here. Also, we included all the JavaScript components of Foundation using the minified foundation.min.js, but the Foundation source does contain each individual JavaScript file. There are ways to combine and minify JavaScript with Jekyll. By individually selecting JavaScript files and editing _sass/foundation.scss, it is possible to include only the Foundation components that you intend to use, minimizing download times and the overall footprint of your website. All of that is outside the scope of this simple tutorial, but more information can be found around the web and in the official documentation for each respective project. The source for our jekyll-foundation project can be found in the jekyll-foundation Github repository.

Running a LG 31MU97 on Linux at 4096x2160 at 60Hz

LG 31MU97C-B 4k monitor

Using relatively new hardware on Linux systems can prove to be challenging. Last year, I ran into several challenges when I decided to use an MSI Gaming laptop as a development machine while I was backpacking around the world. Now that I’m in one place again, I’ve run into similar challenges when trying to get my LG 31MU97C-B 4k monitor working at its optimal resolution in Linux. The following guide shows the modelines that must be added via the xrandr command in order to have this monitor function at 4096x2160 at 60Hz.

The 4k/UHD standard for video is a bit confusing. Typically, an Ultra High Definition (UHD) TV or computer monitor has a native resolution of 3840x2160 pixels. These are often misbranded as 4k displays, even though true (DCI) 4k comes to 4096x2160 pixels; 3840x2160 is instead exactly double the width and height of a standard 1920x1080 display. Most displays cheap out on these extra pixels since, due to the aspect ratios used in TV and cinema, the 3840x2160 resolution allows most content to either fit within the viewable area or scale evenly without adding black bars.

Viewing content at 4096x2160 at a refresh rate higher than 30Hz requires either HDMI 2.0 or DisplayPort 1.2. HDMI 2.0 still hasn’t found its way onto a lot of consumer video cards (it’s more common on nVidia cards than AMD), and the LG 31MU97 doesn’t support HDMI 2.0 anyway. I’m using an ATI/AMD Radeon 7800, which supports DisplayPort 1.2, with the open source radeon drivers. When running the xrandr command, the full 4096x2160 mode isn’t even displayed:

Screen 0: minimum 320 x 200, current 3840 x 2160, maximum 16384 x 16384
DisplayPort-0 disconnected (normal left inverted right x axis y axis)
DisplayPort-1 connected 3840x2160+0+0 (normal left inverted right x axis y axis) 621mm x 341mm
   3840x2160     60.00*+  30.00  
   1920x1080     60.00    59.94  
   1600x900      60.00  
   1280x1024     60.02  
   1152x864      59.97  
   1280x720      60.00    59.94  
   1024x768      60.00  
   800x600       60.32  
   720x480       60.00    59.94  
   640x480       60.00    59.94  
HDMI-0 disconnected (normal left inverted right x axis y axis)
DVI-0 disconnected (normal left inverted right x axis y axis)
DVI-1 disconnected (normal left inverted right x axis y axis)

I found some forum posts that used commands such as cvt or gtf to create the correct modeline for this resolution. Unfortunately, I couldn’t get any modelines generated by those tools to work correctly. I finally found a forum post by lordmocha explaining that the closed source AMD/ATI drivers did establish the correct modelines for this particular monitor. Using the information from that post [1], I was able to create the correct modes using xrandr.

xrandr --newmode "4096x2160_60" 556.730  4096 4104 4136 4176  2160 2208 2216 2222 +hsync +vsync
xrandr --newmode "4096x2160_50" 526.170  4096 4632 4696 4736  2160 2208 2216 2222 +hsync +vsync

In the above example, I added a mode for both 60Hz and a fall-back to 50Hz, with both horizontal and vertical sync enabled. I’ve also tested the same modes with both hsync and vsync disabled and they work as well (although you’ll get a warning pop-up on the monitor’s on-screen display). You can create additional modes with different names (e.g. "4096x2160_60nosync") if you want to switch between synchronization being on or off on the fly. If you play games on Linux, disabling the vertical sync can sometimes help if you experience input lag issues, at the expense of increased screen tearing.
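
For example, a no-sync variant of the 60Hz mode would just reuse the same timings with the sync flags negated:

xrandr --newmode "4096x2160_60nosync" 556.730  4096 4104 4136 4176  2160 2208 2216 2222 -hsync -vsync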

Next, the modes need to be added to the video card’s DisplayPort output, which can be done with the following commands. These may need to be adjusted depending on the port the monitor is connected to.

xrandr --addmode DisplayPort-1 4096x2160_50
xrandr --addmode DisplayPort-1 4096x2160_60

If you have another display connected, you can easily test this new mode without the risk of locking yourself out of your machine by running your xrandr commands from the second screen. If you don’t have a second display and you end up with a blank screen on the mode switch, you can remotely connect from another machine via SSH to restart the X server, or possibly switch to a console virtual terminal (typically Ctrl+Alt+F1). You can also use the following command to attempt the mode switch, wait fifteen seconds, and then switch back.

xrandr -s "4096x2160_60"; sleep 15; xrandr --output DisplayPort-1 --mode 3840x2160 --auto

If everything goes smoothly, xrandr should now indicate the display is running using the custom mode at a 60Hz refresh rate.

Screen 0: minimum 320 x 200, current 4096 x 2160, maximum 16384 x 16384
DisplayPort-0 disconnected (normal left inverted right x axis y axis)
DisplayPort-1 connected 4096x2160+0+0 (normal left inverted right x axis y axis) 621mm x 341mm
   3840x2160     60.00 +  30.00  
   1920x1080     60.00    59.94  
   1600x900      60.00  
   1280x1024     60.02  
   1152x864      59.97  
   1280x720      60.00    59.94  
   1024x768      60.00  
   800x600       60.32  
   720x480       60.00    59.94  
   640x480       60.00    59.94  
   4096x2160_50  50.00  
   4096x2160_60  60.00*  
HDMI-0 disconnected (normal left inverted right x axis y axis)
DVI-0 disconnected (normal left inverted right x axis y axis)
DVI-1 disconnected (normal left inverted right x axis y axis)

I use the i3 window manager. To persist these changes across reboots, I add the above commands to an initialization script that i3 runs when it starts; a sketch of that setup follows. Most desktop environments offer a way to run commands at startup, and you should simply be able to put the above commands in a script and add it to the list of startup items.
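
As a sketch, assuming the script lives at ~/.screenlayout/lg4k.sh (the path and file name are my own choices):

#!/bin/sh
# ~/.screenlayout/lg4k.sh -- register and select the custom 4k mode
xrandr --newmode "4096x2160_60" 556.730  4096 4104 4136 4176  2160 2208 2216 2222 +hsync +vsync
xrandr --addmode DisplayPort-1 4096x2160_60
xrandr --output DisplayPort-1 --mode 4096x2160_60

Then, in the i3 config (~/.config/i3/config):

exec --no-startup-id ~/.screenlayout/lg4k.sh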

You can also add the modes to /etc/X11/xorg.conf. Because Xorg is pretty good at auto-detecting hardware these days, most Linux distributions don’t even create an xorg.conf file, but it can be added manually when automatic configuration doesn’t work. By adding the correct modes directly to the Xorg configuration, the graphical display will run at 4096x2160 even at the login screen. I didn’t bother going this route myself, but there’s plenty of existing documentation on xorg.conf that can be searched for; a rough fragment is sketched below.
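
The equivalent xorg.conf fragment would look roughly like the following; the Identifier is an assumption and, depending on the driver, may need to be tied to the output through a matching Device or Screen section:

Section "Monitor"
    Identifier "DisplayPort-1"
    Modeline "4096x2160_60" 556.730  4096 4104 4136 4176  2160 2208 2216 2222 +hsync +vsync
    Option "PreferredMode" "4096x2160_60"
EndSection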

Being an early adopter of technology in the Linux world can lend itself to some headaches. It probably doesn’t help that I use Gentoo, a distribution that’s geared more towards developers and people in the tech industry. (If I had used Ubuntu with the official ATI/AMD drivers, I may have been able to run at the correct resolution “out of the box”.) Still, even though it takes me a little longer to get some things running, I gain an understanding of how the underlying technology works. Contributing back to the open source community also means that in the future, other people will be able to plug in peripherals and have them simply “just work.”

  1. LG 31MU97 - Page 7 - HardForum. 13 November 2015. lordmocha. HardForum.
