On the value of money-as-currency

I first wrote this as a comment to a Veritasium video that analyzes the perils of a recommendation-based approach to content distribution. In attempting to link to that comment on Twitter, I found myself unable to do so because Twitter or Youtube or both don’t seem to give a shit about preserving user intent. Well, fuck you both, this is my website, and that comment is hosted below.

In the 1990s, before Internet use exploded, there were no large-scale Internet content distributors that valued ‘engagement’ above all else, because there weren’t enough users to sustain an advertising-based business model (a state that I hold in much higher regard now than I did back then). There was still plenty of worthwhile stuff to find, but you had to know how to find it. It’s not hard to do, but it does require some effort: typing into a search field, bookmarking things, making lists, setting reminders… and the hardest one of all (then and now): remembering to search in the first place. All of these are critical skills that need to be USED to get the most out of the Internet. Just like muscles, without use you get atrophy.

As the Internet gained users, it became increasingly crowded with corporations, which largely fueled that expansion. In the same way that YouTube and pretty much every commercial web presence wants to increase engagement (because eyeballs are currency if you’re counting advertising impressions), corporations in general tend to value growth and shareholder value over everything else. They figured out a long time ago that reducing friction usually yields more customers. That’s not always a bad thing; for example a hand tool that’s easier to use because it has a molded hand grip is just more effective than one without a hand grip. Ease-of-use is an important part of the overall cost / benefit analysis, for both consumers and producers.

The trick is that the cost / benefit continuum extends in both directions. It is optimistic (but naive) to think about that continuum in one direction: “people are probably willing to sustain a higher cost to receive a greater benefit” while not also thinking of the other direction where people are willing to pay a lower cost to receive less benefit. Sure, lots of people have the ability to exclusively watch movies or shows that they select themselves (e.g. DVDs), but a huge number of people would rather watch something they didn’t select if it means they don’t have to work for it. It is almost certain that they won’t be watching precisely what they want to watch all the time, but hey – just think of all that effort saved! “I make choices constantly at work (school, whatever), and now at the end of a long day, I just want to relax and be spoon-fed!”.

Probably the most familiar example of how that trade-off plays out is called ‘television’, and it sucks (IMO) for the very reason that allowed media corporations to grow their empires: they traveled too far down the stupid side of the cost / benefit continuum and destroyed their own ecosystem in the process, while also ‘earning’ absurd amounts of money. There’s a very blurry line between making products and services more accessible in ways that empower people (“molded hand grip”) versus accidentally or on purpose participating in a race to the bottom that yields local, short term gains (“more viewers for ME!”) at the cost of negative global or long-term effects (“I just reinforced mindless consumer behaviors!”). Historical note: TV wasn’t always like that. In the early days when there were only a handful of networks, there was some consideration for the greater good that factored into, say, content programming decisions. That all evaporated with the rise of consumerism and much greater competition in the form of an explosion of TV / video outlets.

One of my favorite things about the Internet is that I get to decide how I use it, where I go, what I produce, and what I consume. To me, it is the polar opposite of TV or any other feed-based outlet. These themes of self-direction and choice were widely shared among early Internet users, and it’s not a coincidence that those same people are also willing to take the more difficult and less convenient path if it unlocks the reward they want – because they had to on the early Internet (auto-play wouldn’t arrive for decades). Yet these days I see so many people – well-meaning, honest, good people – behaving as though they don’t have any choice, and even aligning their own goals with the goals of the very corporations that put them in an unfavorable position. I see this as a direct projection of the world-view that has shaped at least the US economy and culture for the last ~50 years or so, so… ya know… “f*ck you very much, corporate America!”. (side note: yes, I’m absolutely a Youtube Red subscriber).

In closing, a few final thoughts:

  • Support the creators you care about with currency that is actual money, not ‘engagement’ or any other non-money currency.
  • If you are a creator, think about how you can directly earn currency-money yourself, if you aren’t already. It’s not supposed to be easy. If it seems easy, you are probably getting jacked.
  • The biggest scam on the whole Internet is how people were led to believe that something is free if they don’t pay for it in currency-money. Nothing is free when businesses are involved. They’re also very good at hiding the value of whatever kind of non-money currency is being extracted from users, which is terrible because users then have no way to even make a value judgement by asking themselves “is this cost-benefit trade-off worthwhile for me?”. As a result, the whole idea that you CAN and SHOULD make that choice is starting to disappear.
  • For a thorough and lucid exploration of these and related concepts, I can’t recommend the documentary “The Century of the Self” highly enough. I mean this so sincerely that I’m not even going to link to it :)
Posted in bit bucket | 1 Comment

A Low But Significant Bar

A friend got a 4th gen AppleTV in late 2015 – a friend who enjoys retro video games but doesn’t own any consoles. I tried Provenance on my iPad and was pretty satisfied, and figured it would be pretty nifty on the AppleTV. This friend is not a developer, and at the time didn’t own a computer new enough to interface with a modern iOS device. Even if she had, she objected to paying $100 / year for the permission to run a self-built app (an objection I share), so I figured I would handle the building and installing part during one of my occasional visits to her town (~6 hours away by car).

You’re probably already thinking about how this is likely to blow up, and you’re right: roughly 1 year after I deployed the app to the AppleTV, the provisioning profile expired, and then nobody could play pokemon anymore. I felt truly terrible about this. DRM sucks.

Then just a few days ago, her old black MacBook was replaced with one of the fancy new ones – new enough to speak directly to an AppleTV, new enough to run the current version of Xcode. With the help of a reverse SSH tunnel, I screen shared to her machine to build and install the current version of Provenance, only to find that the current version uses a different bundle ID. I had no luck convincing Xcode to replace the old Provenance app bundle with the newly built one while preserving the existing data container.
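For the curious, the tunnel looked something like this (a sketch; relay.example.com stands in for a host we could both reach, and macOS Screen Sharing listens on port 5900):

# On her Mac: publish her Screen Sharing port on the relay host
ssh -N -R 5901:localhost:5900 user@relay.example.com

# On my Mac: pull that port back here, then connect to it
ssh -N -L 5901:localhost:5901 user@relay.example.com
open vnc://localhost:5901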

To my pleasant surprise, Xcode’s Download / Replace Container feature actually saved the day, and I was able to export the ~2 GB container from the old broken-for-years Provenance version, and hand that container to the new Provenance version. This is sort of a bare minimum level of data portability, but it’s more than I expected, so “props”. My friend is pretty excited about picking up where she left off with the pokemans!

Posted in bit bucket, development, Pro Tip | Leave a comment

Silence sandbox log spam (or: Why is sandbox logging ALLOWED access?!)

I’ve been annoyed by sandbox log verbosity since always, but recently I was pushed over the edge when playing with a tool (htop) that calls task_for_pid a lot. It’s open source, so not code signed or entitled. There are various ways to allow the calls to succeed (e.g. run as root, or add -p to taskgated‘s args and run htop setgid procmod), however this does nothing to alleviate the log spam, because ALLOWED access is still logged – sometimes by both kernel and sandboxd. If you’re making a lot of ‘allowed’ calls, this drives syslogd CPU usage up into the noticeable range. In fact on an otherwise idle system running htop (with -d 5), this effect results in syslogd being the busiest process on the system! Not ok. No love for the boy who cried “no wolf”.

Here is some medicine:

# /etc/asl.conf rules, placed above 'Rules for /var/log/system.log'
? [= Sender kernel] [= Facility kern] [N= Level 5] [Z= Message allow(0) mach-priv-task-port] ignore
? [= Sender sandboxd] [= Facility com.apple.sandbox] [N= Level 5] [Z= Message allow mach-priv-task-port] ignore
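To make the new rules take effect, restart syslogd (launchd brings it right back):

sudo killall syslogd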

This cuts syslogd CPU usage by about 50% in my testing. Of course I would prefer that these messages were never sent, but it’s an improvement. Note that trunk htop has mitigated this problem by caching (and not retrying) denied attempts, but there’s nothing htop can do about the spam from *allowed* attempts.

I should mention that I’m not allergic to sandbox or policy enforcement in general. This is more of a ‘living in harmony’ kind of thing, and although there are serious ownership-related existential questions breaking through the surface with increasing frequency, this post isn’t about that.

Except for the next sentence. As a thought experiment, see if you can come up with any justification for logging these ‘allow’ messages that benefits the user, and that outweighs both the potential performance impact (read: battery, if you are rolling your eyes right now) and the signal to noise ratio impact.

I know that I’m one thousand years old for looking at log files in the first place (especially when the house *isn’t* on fire), and I’m ok with that. I might even assert that a person could build a career by curiously reading everything the system says.

Posted in OS X, Pro Tip | Leave a comment

Troubleshooting the Adaptive Firewall in OS X Server

Recently I did some spelunking into the Adaptive Firewall facility of OS X Server to devise a procedure for troubleshooting a reported failure of AF to block failed SSH logins. Consider this a supplement to this post at krypted (although do note that the hb_summary tool mentioned there seems to be defunct now).

  • 1) Verify that AdaptiveFirewall (AF) is actually enabled. The “Adaptive” part is what reacts to events such as login failures; I mention this because adding a block rule manually using afctl is roughly equivalent to adding a block rule in pf, and even if this block rule takes effect (because pf is enabled), that does not imply that AdaptiveFirewall is enabled.
  • 2) AF doesn’t detect the events itself; it relies on Event Monitor (emond) for this. Verify that emond is seeing the activity in question.
  • 3) Verify that AF is creating the correct rules in pf based on what it learns from emond.

First, create the following shell alias to allow easy invocation of afctl:

alias afctl='/Applications/Server.app/Contents/ServerRoot/usr/libexec/afctl'


Verify that AF is enabled

Check to see if AF’s launchd job is running. You should see the com.apple.afctl job listed.

bash-3.2# launchctl list | grep afctl
- 0 com.apple.afctl

If it’s not listed, re-initialize AF. This doesn’t destroy any state. Make sure it exits zero (no error).

bash-3.2# afctl -c ; echo $?

Re-enable any previously disabled rules, check exit status.

bash-3.2# afctl -e ; echo $?

Force AF into active state, check exit status. Don’t be scared by the pfctl boilerplate about the -f option.

bash-3.2# afctl -f ; echo $?
pfctl: Use of -f option, could result in flushing of rules
present in the main ruleset added by the system at startup.
See /etc/pf.conf for further details.
No ALTQ support in kernel
ALTQ related functions disabled


Verify that emond is seeing the auth failure events

Configure emond to do some additional logging. Edit /etc/emond/emond.plist to increase the debugLevel to 4 and set logEvent to true, as shown below:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
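(The excerpt above shows just the plist header; the two keys of interest end up looking something like this — a minimal sketch, since their exact nesting within the file may differ across OS releases.)

<key>debugLevel</key>
<integer>4</integer>
<key>logEvent</key>
<true/>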

After making the above change, run: sudo killall emond. There is now an additional log in /Library/Logs/EventMonitor (EventMonitor.event.log), and both that and the error.log now contain more verbose information. Watch these files with tail -f to see ongoing activity. Note that for arcane reasons, a single failed SSH attempt actually results in multiple detected auth failures.
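For example (assuming the error log follows the event log’s naming pattern):

tail -f /Library/Logs/EventMonitor/EventMonitor.event.log \
        /Library/Logs/EventMonitor/EventMonitor.error.log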

You can also look at /etc/emond.d/state, which is only written upon reception of SIGTERM. The state file lists all the hosts that have attempted to connect to a protected service, along with the count of failed auths. Successful logins are indicated by a bad auth count of zero.
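Since killall sends SIGTERM by default, you can force a fresh state file and read it in one shot (the sleep gives emond a moment to write it out before launchd respawns it):

sudo killall emond; sleep 1; sudo cat /etc/emond.d/state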


Verify correct rules in pf

pf rules associated with AF are all rooted under a pf anchor (anchor is pf’s word for ‘group’) called com.apple/400.AdaptiveFirewall. Show the active pf rules under this anchor:

bash-3.2# pfctl -s Anchors -a com.apple/400.AdaptiveFirewall -s rules -v
No ALTQ support in kernel
ALTQ related functions disabled
block drop in quick from <blockedHosts> to any
 [ Evaluations: 31705 Packets: 0 Bytes: 0 States: 0 ]
 [ Inserted: uid 0 pid 22564 ]

(note that the ‘evaluations’ counter should be non-zero; if it’s zero that likely means pf isn’t enabled; afctl -f is supposed to do that)

bash-3.2# pfctl -s info
No ALTQ support in kernel
ALTQ related functions disabled
Status: Enabled for 0 days 00:01:31           Debug: Urgent

State Table                          Total             Rate
  current entries                        0               
  searches                         2034928        22361.8/s
  inserts                                0            0.0/s
  removals                               0            0.0/s
  match                             999161        10979.8/s
  bad-offset                             0            0.0/s
  fragment                               0            0.0/s
  short                                  0            0.0/s
  normalize                              0            0.0/s
  memory                                 0            0.0/s
  bad-timestamp                          0            0.0/s
  congestion                             0            0.0/s
  ip-option                            418            4.6/s
  proto-cksum                            0            0.0/s
  state-mismatch                         0            0.0/s
  state-insert                           0            0.0/s
  state-limit                            0            0.0/s
  src-limit                              0            0.0/s
  synproxy                               0            0.0/s
  dummynet                               0            0.0/s

If afctl -f doesn’t enable pf, that’s a bug. If this is the case, you could try manually enabling pf. If it’s already enabled, it says so:

bash-3.2# pfctl -e
No ALTQ support in kernel
ALTQ related functions disabled
pfctl: pf already enabled

pf uses ‘tables’ to efficiently store data associated with rules that only differ by a single element (such as IP address). Show the list of pf tables under the AF anchor:

bash-3.2# pfctl -a com.apple/400.AdaptiveFirewall -s Tables -vvv
No ALTQ support in kernel
ALTQ related functions disabled
-pa-r-	blockedHosts	com.apple/400.AdaptiveFirewall
	Addresses:   0
	Cleared:     Fri Mar 25 11:38:30 2016
	References:  [ Anchors: 0                  Rules: 1                  ]
	Evaluations: [ NoMatch: 529189             Match: 141                ]
	In/Block:    [ Packets: 141                Bytes: 15909              ]
	In/Pass:     [ Packets: 0                  Bytes: 0                  ]
	In/XPass:    [ Packets: 0                  Bytes: 0                  ]
	Out/Block:   [ Packets: 0                  Bytes: 0                  ]
	Out/Pass:    [ Packets: 0                  Bytes: 0                  ]
	Out/XPass:   [ Packets: 0                  Bytes: 0                  ]

Show the contents of the blockedHosts table in the AF anchor. In the output below (addresses redacted), the first entry is one I manually added using afctl, and the second (x.x.x.x) was automatically added by AF due to failed SSH login attempts.

bash-3.2# pfctl -a com.apple/400.AdaptiveFirewall -t blockedHosts -T show -vvv
No ALTQ support in kernel
ALTQ related functions disabled
	Cleared:     Fri Mar 25 13:26:12 2016
	In/Block:    [ Packets: 0                  Bytes: 0                  ]
	In/Pass:     [ Packets: 0                  Bytes: 0                  ]
	Out/Block:   [ Packets: 0                  Bytes: 0                  ]
	Out/Pass:    [ Packets: 0                  Bytes: 0                  ]
	Cleared:     Fri Mar 25 14:15:38 2016
	In/Block:    [ Packets: 8                  Bytes: 1088               ]
	In/Pass:     [ Packets: 0                  Bytes: 0                  ]
	Out/Block:   [ Packets: 0                  Bytes: 0                  ]
	Out/Pass:    [ Packets: 0                  Bytes: 0                  ]

… I think that’s pretty much everything, except for some errata:

* Starting from a clean slate, you can get the failed auth counter for a given sending host up to 25 very quickly. At that point, the block rule is created and lasts for 15 minutes by default. No failed auths happen from that host in this 15 minute window, because the sending host is blocked and can’t reach sshd. After the 15 minute interval, the block rule is removed. An additional failed auth earns the sending host another 15 minute block rule. The bad auth counter is only reset by a successful login from that host.

* A block rule is only created once there have been 25 failed auths from the same IP address. This value is configurable with afctl. There is no time window associated with this policy, so a botnet with 100 hosts could attempt 100 * 25 = 2,500 SSH auths against your server. Since there is no reliable way to know that you’re being hit by a botnet, AF cannot help you guard against this except by reducing the failed auth count threshold required for a block rule.
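For reference, manual afctl manipulation looks something like this — flag usage is per the krypted post linked above and the addresses are placeholders, so verify against afctl’s own usage output:

afctl -a 203.0.113.5 -t 45    # block 203.0.113.5 for 45 minutes
afctl -r 203.0.113.5          # remove that block
afctl -w 198.51.100.7         # whitelist a host so AF never blocks it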

Posted in OS X | Leave a comment

eGPU intrigue

As we consider a mac user’s renewed quest for GPU performance – this time for an ‘external’ GPU in a tbolt2 PCI chassis – we find similarities to other timeless quests. For instance, in our quest, the path is not clear in the beginning, and there is conflicting advice about how to proceed. The establishment is against us, and success may be temporary due to an ever-shifting and occasionally hostile landscape. The journey is fraught with peril, and you fully expect impasses surmountable only through deep soul searching and great courage. Also we have to assemble items from the marketplace and maybe perform a heroic deed.

My first thought was that tbolt2 wouldn’t have the bandwidth to let a fast GPU shine. While it is true that a fast card would be limited by tb2, it’s still totally fast enough to outperform the MacPro6,1 D700s for many workloads. From the barefeats post:

Even when ‘hobbled’ by the limited bandwidth of Thunderbolt 2, the eGPU TITAN X ‘buries’ the AMD FirePro D700 on this OpenGL test.

The Diablo III results in that same post are even crazier. The new Mac Pro gets 78 fps with the internal D700 and 124 fps with a GeForce TITAN X in an external tb2 chassis. Pretty not bad. Here’s the kicker: a six-year-old 2010 Mac Pro scores 167 fps (with the card installed in a legacy PCI slot. I mean a PCI slot. Heyo.)

My task is to pick a set of tradeoffs, optimizing in order for: performance, build simplicity and cleanliness, [ergo | cost]. The most commonly used thunderbolt2 chassis for hosting GPUs seems to be the Akitio unit, even though it’s a bit too small for many cards (but you can bend it and/or not close the back hatch) and its power supply is too weak to push a decent card. On the up side, the $200 – $300 price is comparatively low. Since the Akitio case isn’t equipped to power a fast GPU on its own, many of the builds I can find just have the components splayed out on the desk…

eGPUs all splayed out. credit: nesone from techinferno forums.

Others decide to transplant the Akitio board into a larger case with enough room for an ATX power supply and a full size card or two, and without leaving the back door open.


I once was told a very short story about how GPU drivers are all dens of mutual patent infringement, where everyone is guilty and they all just keep it ‘secret’ and carry on. Doing unsupported things with GPUs tends to require some negotiating with and gentle coercion of your computer system. Learning the secret handshakes in the first place is a mysterious business, and it’s easy to imagine all manner of unsavory behavior and sundry cut-throat affairs in this trade.

Our quest has produced one such tale already. There’s a person called netkas who did a lot of the groundwork in bootstrapping the “eGPU” scene… operated a forum, was responsive to people, helped them build their rigs, etc. Netkas then offered a service: provide diagnostic details from your system while your eGPU stuff is plugged in, and you will be told whether your rig is viable. If it is viable, for the price of $20 you will be taught to sing the haunting melody that subdues OS X and brings your GPU to life. This paid service seems like a reasonable way for customers to support the ongoing work of playing cat and mouse with the vendors. People were grateful.

Very shortly after this service started and began yielding happy customers, it is said that a rival player known as goalque (seemingly well regarded by his side of the internet) inspected the work of netkas and generalized it into a rather burly shell script that now sits in goalque’s github repo. It may be executed by anyone for no fee, much to the continued frustration of the netkas camp. The feud lives on, with the scene’s two popular forums (netkas and techinferno) appearing to come down on opposing sides, all of which is completely inconsequential to the users, who are all either stoked to pay $20 to netkas to light up their rig, or stoked to run a shell script from github to light up their rig.

Posted in mac pro | Leave a comment

How to rescue aborted QuickTime Player audio recordings

Know that feeling when you remember you don’t have your keys just as you’re closing a self-locking door? It’s the same feeling as when you use QuickTime Player to record some lengthy audio and remember you didn’t stop the recording just as you’re putting the laptop to sleep. I can’t help with the first problem, but after having lost a couple of QuickTime recordings this way, I put on my virtual deerstalker and got to work.

The 10 minute version of this report is here: https://www.youtube.com/watch?v=N0Ec7zMyXQ8

… and the 30 second version is here: https://gist.github.com/dreness/e61fb16dcb831adaf6ff#file-fix-aifc-sh

Posted in media, OS X, Pro Tip, scripts, The More You Know, tutorials | Leave a comment

Spotlight, UserFilesOnly, kMDItemSupportFileType, and MDSystemFile

Recently, VMware Fusion stopped appearing in Spotlight results. Other queries return expected results, and the Spotlight index info for Fusion appears OK at first glance via:

mdls "/Applications/VMware Fusion.app"

What’s going on, then? To get a different perspective, I tried a Spotlight search in Finder, saved the results, and examined the resulting XML file with Property List Editor.

It seems there are additional filters in this search that aren’t accounted for in the UI (which is typical Apple, but I digress), such as FinderFilesOnly and UserFilesOnly. I decided to try running the raw query without those extra filters using mdfind, and sure enough:

$ mdfind '(** = "vmware fusion*"cdw) && (kMDItemContentTypeTree=com.apple.application)'
/Applications/VMware Fusion.app

A closer look at the mdls output for VMware Fusion.app reveals the culprit:

$ mdls -name kMDItemSupportFileType /Applications/VMware\ Fusion.app
kMDItemSupportFileType = (
    MDSystemFile
)

Presumably that MDSystemFile marking is what trips the UserFilesOnly filter. Kill it by overwriting (not deleting) the attribute:

$ sudo xattr -w com.apple.metadata:kMDItemSupportFileType "" /Applications/VMware\ Fusion.app
$ mdls -name kMDItemSupportFileType /Applications/VMware\ Fusion.app
kMDItemSupportFileType = (null)
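If Spotlight doesn’t pick up the change right away, you can nudge it to re-import the bundle:

mdimport /Applications/VMware\ Fusion.app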

… and now everything’s OK again.

Posted in OS X | Leave a comment

Traffic micro-management: limit network bandwidth used by an OS X process

Problem: you have some silly uploader app that only knows one speed: maximum. You would love a way to make that app back off so it doesn’t saturate your uplink and badly impact the other things using your network. Previously in OS X, you would accomplish this using ipfw, but these days you’d use pfctl and dnctl.

None of that is breaking news. The reason for this post is that I thought of an easy way to make the limits apply only to a specific app’s traffic, and all of that traffic. Traditionally you’d have to identify the traffic using some combination of source or destination IP address or port, which can get quite cumbersome. PF also supports matching packets by uid and gid. Let’s use gid, and then run the target app with a custom gid :)

First, make a new unix group called throttle:

sudo dseditgroup -o create throttle

Next, give yourself permission to run things with the group id of the throttle group. To do this, edit /etc/sudoers to add a line such as this, substituting your actual user name.

username ALL=(ALL:throttle) ALL

To test the new group and sudo configuration, run ‘id -g’, and then again with a custom group id (using sudo’s -g option). The results should be different.

$ id -g
$ sudo -g throttle id -g

Now we can instantiate the requisite PF and dummynet setup. Here’s a little script that wants to be run as root.


#!/bin/bash
# Run as root.

# Reset dummynet to default config
dnctl -f flush

# Compose an addendum to the default config to create a new anchor
read -d '' -r PF <<EOF
dummynet-anchor "throttle"
anchor "throttle"
EOF

# Reset PF to default config and apply our addendum
(cat /etc/pf.conf && echo "$PF") | pfctl -q -f -

# Configure the new anchor
cat <<EOF | pfctl -q -a throttle -f -
no dummynet quick on lo0 all
dummynet out proto tcp group throttle pipe 1
dummynet out proto udp group throttle pipe 1
EOF

# Create the dummynet queue - adjust speed as desired
dnctl pipe 1 config bw 1Mbit/s

# Show new configs
printf "\nGlobal pf dummynet anchors:\n"
pfctl -q -s dummynet
printf "\nthrottle anchor config:\n"
pfctl -q -s dummynet -a throttle
printf "\ndummynet config:\n"
dnctl show queue

# Enable PF
pfctl -E

Finally, start the target app as follows:

sudo -g throttle open -a "Whatever"

To watch the counters on the bandwidth limiting queue:

sudo dnctl show queue

To clear all PF config / state and reset PF to defaults:

sudo pfctl -F all ; sudo pfctl -q -f /etc/pf.conf

NB: prior to 10.11.2, use of “pfctl -F all” might kernel panic your machine. This post was made public in December, but I wrote it back in August and filed the kernel panic bug then, which is fixed as of 10.11.2. Unfortunately, the bug goes back at least as far as 12F45…

To wrap up, keep in mind that this is a hack. What we’re really trying to accomplish requires both a privileged position on the network and also more expressive and fine-grained traffic controls than are provided by dummynet. As both of those things are not always available, doing dumb rate limiting on an individual host as documented above can still be useful.

While we’re here, let’s take a moment to illustrate the difference between dumb host-level rate limits (which I am derisively referring to as “traffic micro-management”) and proper traffic shaping at the network edge. Two primary goals of traffic shaping are: 1) avoid congestion at the bottleneck(s), which is typically your internet connection, and 2) maximize network utilization. Congestion happens when packets traversing the bottleneck get piled up (‘queued’) and have to wait a relatively long time to get through. This results in high ping times (known on Twitch as: “high ms”), in-game lag, and generally sluggish performance of any software that directly or indirectly uses the Internet. The way to avoid congestion at the bottleneck is to avoid sending more traffic than can pass through the bottleneck without incurring substantial additional delay. We also want to utilize as much of the bottleneck’s capacity as possible. These are somewhat opposed goals.

As an analogy for the two goals of avoiding congestion and maximizing throughput, imagine pouring stuff in a funnel as fast as possible without allowing any stuff to accumulate in the funnel.

Now imagine performing the above experiment again, but with multiple people pouring stuff in the funnel, each starting and stopping at random times with no coordination between them. Chances are good that the multi-user experiment won’t do as well at avoiding congestion and/or maximizing throughput. For the same reason, the only way to reliably do traffic shaping to avoid network congestion is to do it at the edge of the network (such as a router), where traffic shaping policies can account for and apply to all traffic that traverses the bottleneck.

Posted in OS X, scripts, The More You Know, tutorials | 3 Comments

Nuke and pave of pfSense on the SG-2440

I may not be the first to deal with the fallout of filesystem corruption on an SG-2440 running pfSense 2.2 due to sudden power loss, but I might be the first to put the cliff notes of the recovery process in one place.

The first obvious symptom of trouble is the web admin throwing HTTP 500 and 503 errors. Research reveals that this problem is not completely rare, and is often caused by unclean shutdowns resulting in filesystem damage. If you’re feeling unsettled about why a tiny fan-less network appliance such as this would be so grumpy about power loss, be advised that pfSense has another mode in which losing power is safe, because the ‘non-volatile’ file systems like / are mounted read-only, and volatile ones like /var/log, /tmp, and /var/run are on a ramdisk. The SG-2440 does not use this mode by default; it uses the ‘full install’ mode, which behaves much more like a standard FreeBSD system, so you’re supposed to shut it down like a nice person. Read up on the difference between the pfSense “full install” and “nanobsd” configurations.

Since the web admin is dead, to diagnose this further we’ll use the console port.

Accessing pfSense console port from OS X
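  • Connect mini-usb cable between pfsense console port and the Mac.
  • Install the same Silicon Labs USB to COM bridge driver linked in the Windows section below.
  • Boot firewall.
  • Connect with screen at 115200 baud; with the Silicon Labs driver installed, the device typically shows up as /dev/cu.SLAB_USBtoUART:

    screen /dev/cu.SLAB_USBtoUART 115200

  • Press enter. You should have a root shell.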

Accessing pfSense console port from Windows

  • Connect mini-usb cable between pfsense console port and windows machine.
  • Install the USB to COM bridge driver found here: http://www.silabs.com/products/mcu/Pages/USBtoUARTBridgeVCPDrivers.aspx
  • Open Device Manager -> Ports
  • Locate the Silicon Labs USB bridge COM listing. Note the number after COM, e.g. COM3
  • Boot firewall
  • Fire up PuTTY and make a new serial connection with a speed of 115200 using the COM port discovered previously
  • Press enter. You should have a root shell.

Once consoled in, I ran /etc/rc.initial and used the ‘Restart PHP-FPM’ command to try to revive the web stuff, as I read this had worked for some folks. It emitted some nonsense about not knowing what the wheel group means. A cursory glance around town shows that /etc/group, /etc/passwd, and /etc/master.passwd are all munged. Not good.

Reinstall pfSense

  • Download a memstick image from pfsense. Choose the ‘netgate’ option from the Computer Architecture menu, since apparently the SG-2440 is a netgate.
  • Prepare a USB stick with install media.
    • Insert a USB stick (into your workstation, in this case a mac) that you don’t mind erasing.
    • If any filesystems on the usb stick are mounted, unmount them (but do not eject the device) – you can do this with Disk Utility by selecting the volumes and clicking “Unmount”.
    • Find the USB stick device number with: diskutil list
    • Wipe the partition table on the USB stick with dd:
      sudo dd if=/dev/zero of=/dev/disk3 bs=1m count=1

      (assuming the USB stick is /dev/disk3)

    • Copy the image to the device:
      gzcat pfSense-memstick-ADI-2.2.2-RELEASE-amd64.img.gz | \
      sudo dd of=/dev/disk3 bs=16k
  • Eject the USB stick and insert it into one of the pfSense USB ports
  • Boot the pfSense box.
  • Shortly after boot, you are prompted to press F12 if you want a boot menu. Do it.
  • You should now see a list of storage devices; select the USB stick.
  • Let the next menu pass you by; don’t choose anything.
  • After a bit more booting, you will be given the chance to press ‘i’ to run the installer. Do that.
  • From the next menu, accept the console settings.
  • Choose “Custom Install”
  • Select the Generic Ultra HS-Combo Disk as the target for the installation
  • Choose “Format this Disk”
  • Choose “Use this Geometry”
  • Format da1
  • Skip the custom partitioning step
  • Accept and install Bootblocks
  • Select the internal drive
  • Accept and Create
  • Watch the progress window
  • Embedded Kernel
  • Reboot
  • No VLANs
  • Name each of the four network interfaces igb0, igb1, igb2, igb3 for WAN, LAN, OPT1, OPT2 respectively.
  • Type ‘y’ to finish.
pfSense (pfSense) 2.2.2-RELEASE amd64 Mon Apr 13 20:10:22 CDT 2015
Bootup complete
FreeBSD/amd64 (pfSense.localdomain) (ttyu1)
*** Welcome to pfSense 2.2.2-RELEASE-pfSense (amd64) on pfSense **
 WAN (wan) -> igb0 -> 
 LAN (lan) -> igb1 -> v4:
 OPT1 (opt1) -> igb2 -> 
 OPT2 (opt2) -> igb3 -> 
 0) Logout (SSH only)                  9) pfTop
 1) Assign Interfaces                 10) Filter Logs
 2) Set interface(s) IP address       11) Restart webConfigurator
 3) Reset webConfigurator password    12) pfSense Developer Shell
 4) Reset to factory defaults         13) Upgrade from console
 5) Reboot system                     14) Enable Secure Shell (sshd)
 6) Halt system                       15) Restore recent configuration
 7) Ping host                         16) Restart PHP-FPM
 8) Shell
Enter an option:
Posted in bit bucket | Leave a comment

PKTAP extensions to tcpdump in OS X

The tcpdump man page in OS X contains various references to something called PKTAP, such as in the documentation for the -k option:

 Control the display of packet metadata via an optional
 metadata_arg argument. This is useful when displaying packets
 saved in the pcap-ng file format or with interfaces that
 support the PKTAP data link type.

 By default, when the metadata_arg optional argument is not
 specified, any available packet metadata information is
 printed out.

 The metadata_arg argument controls the display of specific
 packet metadata information using a flag word, where each
 character corresponds to a type of packet metadata as follows:

 I interface name (or interface ID)
 N process name
 P process ID
 S service class
 D direction
 C comment

 This is an Apple modification.

This sounds like fun, but my attempts to use this were foiled by the fact that none of my interfaces support the PKTAP data link type.

If I had searched the man page for other references to PKTAP, I would have learned that tcpdump can create a ‘virtual’ PKTAP interface that wraps a specified list of other interfaces. All those other interfaces are visible through this PKTAP interface, and all the associated metadata is available for viewing / filtering.

e.g. to view only packets sent or received by ssh processes, and also view the additional metadata (-k):

andre@flux [~] % sudo tcpdump -tknq -i pktap,en0 -Q "proc =ssh" 
tcpdump: data link type PKTAP
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pktap,en0, link-type PKTAP (Packet Tap), capture size 65535 bytes
(en0, proc ssh:44637, svc BE, in) IP > tcp 180
(en0, proc ssh:44637, svc CTL, out) IP > tcp 0

To simply view all of the PKTAP metadata on all packets, try something like the following (substituting your active interface(s) for en0):

sudo tcpdump -q -n -i pktap,en0 -k

The PACKET METADATA FILTER section of the man page describes the various filtering controls.

It seems like this PKTAP stuff is used by default when doing packet captures on iOS using the provided tools. Wireshark also supports PKTAP, and had a few words about Apple’s implementation :)

Posted in OS X, Pro Tip, The More You Know | 1 Comment