A Low But Significant Bar

A friend got a 4th gen AppleTV in late 2015 – a friend who enjoys retro video games, but who doesn’t own any consoles. I tried Provenance on my iPad and was pretty satisfied, and figured it would be pretty nifty on the AppleTV. This friend is not a developer, and at the time didn’t own a computer new enough to interface with a modern iOS device. Even if she did, she objected to paying $100 / year for permission to run a self-built app (an objection I share), so I figured I would handle the building and installing part during one of my occasional visits to her town (~6 hours away by car).

You’re probably already thinking about how this is likely to blow up, and you’re right: roughly 1 year after I deployed the app to the AppleTV, the provisioning profile expired, and then nobody could play pokemon anymore. I felt truly terrible about this. DRM sucks.

Then just a few days ago, her black MacBook was replaced with one of the fancy new ones – new enough to speak directly to an AppleTV, new enough to run the current version of Xcode. With the help of a reverse SSH tunnel, I screen shared to her machine to build and install the current version of Provenance, only to find that the current version uses a different bundle ID. I had no luck convincing Xcode to replace the old Provenance app bundle with the newly built one while keeping the existing data container.

To my pleasant surprise, Xcode’s Download / Replace Container feature actually saved the day, and I was able to export the ~2 GB container from the old broken-for-years Provenance version, and hand that container to the new Provenance version. This is sort of a bare minimum level of data portability, but it’s more than I expected, so “props”. My friend is pretty excited about picking up where she left off with the pokemans!

Posted in bit bucket, development, Pro Tip

Silence sandbox log spam (or: Why is sandbox logging ALLOWED access?!)

I’ve been annoyed by sandbox log verbosity since always, but recently I was pushed over the edge when playing with a tool (htop) that calls task_for_pid a lot. It’s open source, so not code signed or entitled. There are various ways to allow the calls to succeed (e.g. run as root, or add -p to taskgated’s args and run htop setgid procmod), however this does nothing to alleviate the log spam, because ALLOWED access is still logged – sometimes by both the kernel and sandboxd. If you’re making a lot of ‘allowed’ calls, this drives syslogd CPU usage up into the noticeable range. In fact, on an otherwise idle system running htop (with -d 5), this effect results in syslogd being the busiest process on the system! Not ok. No love for the boy who cried “no wolf”.

Here is some medicine:

# /etc/asl.conf rules, placed above 'Rules for /var/log/system.log'
? [= Sender kernel] [= Facility kern] [N= Level 5] [Z= Message allow(0) mach-priv-task-port] ignore
? [= Sender sandboxd] [= Facility com.apple.sandbox] [N= Level 5] [Z= Message allow mach-priv-task-port] ignore

This cuts syslogd CPU usage by about 50% in my testing. Of course I would prefer that these messages were never sent, but it’s an improvement. Note that trunk htop has mitigated this problem by caching (and not retrying) denied attempts, but there’s nothing htop can do about the spam from *allowed* attempts.
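After editing /etc/asl.conf, the new rules won’t take effect until syslogd re-reads its configuration; sending it SIGHUP is the usual way to trigger that, and top offers a quick spot-check (the top invocation is just one way to sort by CPU):

```shell
# Make syslogd re-read /etc/asl.conf without rebooting
sudo killall -HUP syslogd

# Spot-check: with htop running, syslogd should no longer top the CPU chart
top -l 1 -o cpu | head -n 20
```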

I should mention that I’m not allergic to sandbox or policy enforcement in general. This is more of a ‘living in harmony’ kind of thing, and although there are serious ownership-related existential questions breaking through the surface with increasing frequency, this post isn’t about that.

Except for the next sentence. As a thought experiment, see if you can come up with any justification for logging these ‘allow’ messages that benefits the user, and that outweighs both the potential performance impact (read: battery, if you are rolling your eyes right now) and the signal to noise ratio impact.

I know that I’m one thousand years old for looking at log files in the first place (especially when the house *isn’t* on fire), and I’m ok with that. I might even assert that a person could build a career by curiously reading everything the system says.

Posted in OS X, Pro Tip

Troubleshooting the Adaptive Firewall in OS X Server

Recently I did some spelunking into the Adaptive Firewall facility of OS X Server to devise a procedure for troubleshooting a reported failure of AF to block failed SSH logins. Consider this a supplement to this post at krypted (though do note that the hb_summary tool mentioned there seems to be defunct now).

  • Verify that AdaptiveFirewall (AF) is actually enabled. The “Adaptive” part is what reacts to events such as login failures; I mention this because adding a block rule manually using afctl is roughly equivalent to adding a block rule in pf, and even if that block rule takes effect (because pf is enabled), that does not imply that AdaptiveFirewall is enabled.
  • Verify that emond is seeing the activity in question. AF doesn’t detect the events itself; it relies on Event Monitor (emond) for this.
  • Verify that AF is creating the correct rules in pf based on what it learns from emond.

First, create the following shell alias to allow easy invocation of afctl:

alias afctl='/Applications/Server.app/Contents/ServerRoot/usr/libexec/afctl'


Verify that AF is enabled

Check to see if AF’s launchd job is running. You should see the com.apple.afctl job listed.

bash-3.2# launchctl list | grep afctl
- 0 com.apple.afctl

If it’s not listed, re-initialize AF. This doesn’t destroy any state. Make sure it exits zero (no error).

bash-3.2# afctl -c ; echo $?

Re-enable any previously disabled rules, check exit status.

bash-3.2# afctl -e ; echo $?

Force AF into active state, check exit status. Don’t be scared by the pfctl boilerplate about the -f option.

bash-3.2# afctl -f ; echo $?
pfctl: Use of -f option, could result in flushing of rules
present in the main ruleset added by the system at startup.
See /etc/pf.conf for further details.
No ALTQ support in kernel
ALTQ related functions disabled


Verify that emond is seeing the auth failure events

Configure emond to do some additional logging: edit /etc/emond/emond.plist to raise the existing debugLevel value to 4 and set logEvent to true. The edited keys look like this (all other keys are unchanged and omitted here; the two keys stay wherever your emond.plist already defines them):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<!-- ... other keys unchanged ... -->
	<key>debugLevel</key>
	<integer>4</integer>
	<key>logEvent</key>
	<true/>
</dict>
</plist>

After making the above change, run: sudo killall emond. There is now an additional log in /Library/Logs/EventMonitor (EventMonitor.event.log), and both that and the error.log now contain more verbose information. Watch these files with tail -f to see ongoing activity. Note that for arcane reasons, a single failed SSH attempt actually results in multiple detected auth failures.

You can also look at /etc/emond.d/state, which is only written upon reception of SIGTERM. The state file lists all the hosts that have attempted to connect to a protected service, along with the count of failed auths. Successful logins are indicated by a bad auth count of zero.
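To force a fresh state file and inspect it (killall sends SIGTERM by default, and launchd should restart emond afterward):

```shell
# Ask emond to write out its state by sending SIGTERM...
sudo killall emond

# ...then inspect the per-host failed auth counts
sudo cat /etc/emond.d/state
```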


Verify correct rules in pf

pf rules associated with AF are all rooted under a pf anchor (anchor is pf’s word for ‘group’) called com.apple/400.AdaptiveFirewall. Show the active pf rules under this anchor:

bash-3.2# pfctl -s Anchors -a com.apple/400.AdaptiveFirewall -s rules -v
No ALTQ support in kernel
ALTQ related functions disabled
block drop in quick from <blockedHosts> to any
 [ Evaluations: 31705 Packets: 0 Bytes: 0 States: 0 ]
 [ Inserted: uid 0 pid 22564 ]

(note that the ‘evaluations’ counter should be non-zero; if it’s zero that likely means pf isn’t enabled; afctl -f is supposed to do that)

bash-3.2# pfctl -s info
No ALTQ support in kernel
ALTQ related functions disabled
Status: Enabled for 0 days 00:01:31           Debug: Urgent

State Table                          Total             Rate
  current entries                        0               
  searches                         2034928        22361.8/s
  inserts                                0            0.0/s
  removals                               0            0.0/s
  match                             999161        10979.8/s
  bad-offset                             0            0.0/s
  fragment                               0            0.0/s
  short                                  0            0.0/s
  normalize                              0            0.0/s
  memory                                 0            0.0/s
  bad-timestamp                          0            0.0/s
  congestion                             0            0.0/s
  ip-option                            418            4.6/s
  proto-cksum                            0            0.0/s
  state-mismatch                         0            0.0/s
  state-insert                           0            0.0/s
  state-limit                            0            0.0/s
  src-limit                              0            0.0/s
  synproxy                               0            0.0/s
  dummynet                               0            0.0/s

If afctl -f doesn’t enable pf, that’s a bug; as a workaround, you can enable pf manually. If it’s already enabled, pfctl says so:

bash-3.2# pfctl -e
No ALTQ support in kernel
ALTQ related functions disabled
pfctl: pf already enabled

pf uses ‘tables’ to efficiently store data associated with rules that only differ by a single element (such as IP address). Show the list of pf tables under the AF anchor:

bash-3.2# pfctl -a com.apple/400.AdaptiveFirewall -s Tables -vvv
No ALTQ support in kernel
ALTQ related functions disabled
-pa-r-	blockedHosts	com.apple/400.AdaptiveFirewall
	Addresses:   0
	Cleared:     Fri Mar 25 11:38:30 2016
	References:  [ Anchors: 0                  Rules: 1                  ]
	Evaluations: [ NoMatch: 529189             Match: 141                ]
	In/Block:    [ Packets: 141                Bytes: 15909              ]
	In/Pass:     [ Packets: 0                  Bytes: 0                  ]
	In/XPass:    [ Packets: 0                  Bytes: 0                  ]
	Out/Block:   [ Packets: 0                  Bytes: 0                  ]
	Out/Pass:    [ Packets: 0                  Bytes: 0                  ]
	Out/XPass:   [ Packets: 0                  Bytes: 0                  ]

Show the contents of the blockedHosts table in the AF anchor. In the output below, one entry is an address I added manually using afctl, and x.x.x.x is a redacted address that was automatically added by AF due to failed SSH login attempts.

bash-3.2# pfctl -a com.apple/400.AdaptiveFirewall -t blockedHosts -T show -vvv
No ALTQ support in kernel
ALTQ related functions disabled
	Cleared:     Fri Mar 25 13:26:12 2016
	In/Block:    [ Packets: 0                  Bytes: 0                  ]
	In/Pass:     [ Packets: 0                  Bytes: 0                  ]
	Out/Block:   [ Packets: 0                  Bytes: 0                  ]
	Out/Pass:    [ Packets: 0                  Bytes: 0                  ]
	Cleared:     Fri Mar 25 14:15:38 2016
	In/Block:    [ Packets: 8                  Bytes: 1088               ]
	In/Pass:     [ Packets: 0                  Bytes: 0                  ]
	Out/Block:   [ Packets: 0                  Bytes: 0                  ]
	Out/Pass:    [ Packets: 0                  Bytes: 0                  ]

… I think that’s pretty much everything, except for some errata:

* Starting from a clean slate, you can get the failed auth counter for a given sending host up to 25 very quickly. At that point, a block rule is created, which lasts for 15 minutes by default. No further failed auths happen from that host during this window, because the sending host is blocked and can’t reach sshd. After the 15 minute interval, the block rule is removed – but because the bad auth counter is only reset by a successful login from that host, a single additional failed auth earns the sending host another 15 minute block rule.

* A block rule is only created once there have been 25 failed auths from the same IP address (this value is configurable with afctl). There is no time window associated with this policy, but the counter is per-address: a botnet with 100 hosts could attempt 100 * 25 = 2,500 SSH auths against your server before every host is blocked. As there is no reliable way to know that you’re being hit by a botnet, AF cannot help you guard against this except by reducing the failed auth count threshold required for a block rule.
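That last point is simple arithmetic, sketched here in shell (100 hosts is a hypothetical botnet size; 25 is the default threshold):

```shell
hosts=100        # hypothetical botnet size
threshold=25     # default failed-auth count before a block rule appears

# Each distinct source address gets $threshold free attempts,
# so total exposure scales linearly with botnet size:
echo $((hosts * threshold))      # 2500

# Lowering the threshold shrinks the exposure proportionally:
threshold=5
echo $((hosts * threshold))      # 500
```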

Posted in OS X

eGPU intrigue

As we consider a mac user’s renewed quest for GPU performance – this time for an ‘external’ GPU in a tbolt2 PCI chassis – we find similarities to other timeless quests. For instance, in our quest, the path is not clear in the beginning, and there is conflicting advice about how to proceed. The establishment is against us, and success may be temporary due to an ever-shifting and occasionally hostile landscape. The journey is fraught with peril, and you fully expect impasses surmountable only through deep soul searching and great courage. Also we have to assemble items from the marketplace and maybe perform a heroic deed.

My first thought was that tbolt2 wouldn’t have the bandwidth to let a fast GPU shine. While it is true that a fast card would be limited by tb2, it’s still totally fast enough to outperform the MacPro6,1 d700s for many workloads. From the barefeats post:

Even when ‘hobbled’ by the limited bandwidth of Thunderbolt 2, the eGPU TITAN X ‘buries’ the AMD FirePro D700 on this OpenGL test.

The Diablo III results in that same post are even crazier. The new mac pro gets 78 fps with the internal d700 and 124 fps with a geforce titan x in an external tb2 chassis. Pretty not bad. Here’s the kicker: a six year old 2010 Mac Pro scores 167 fps (with the card installed in a legacy PCI slot. I mean a PCI slot. Heyo.)

My task is to pick a set of tradeoffs, optimizing in order for: performance, build simplicity and cleanliness, [ergo | cost]. The most commonly used thunderbolt2 chassis for hosting GPUs seems to be the Akitio unit, even though it’s a bit too small for many cards (but you can bend it and / or not close the back hatch), and the bundled power supply is too weak to push a decent card. On the up side, the $200 – $300 price is comparatively low. Since the Akitio case isn’t equipped to power a fast GPU on its own, many of the builds I can find just have the components splayed out on the desk…

eGPUs all splayed out. credit: nesone from techinferno forums.

Others decide to transplant the Akitio board into a larger case with enough room for an ATX power supply and a full size card or two, and without leaving the back door open.


I once was told a very short story about how GPU drivers are all dens of mutual patent infringement, where everyone is guilty and they all just keep it ‘secret’ and carry on. Doing unsupported things with GPUs tends to require some negotiating with and gentle coercion of your computer system. Learning the secret handshakes in the first place is a mysterious business, and it’s easy to imagine all manner of unsavory behavior and sundry cut-throat affairs in this trade.

Our quest has produced one such tale already. There’s a person called netkas who did a lot of the groundwork in bootstrapping the “eGPU” scene… operated a forum, was responsive to people, helped them build their rigs, etc. Netkas then offers a service where if you provide diagnostic details from your system while your eGPU stuff is plugged in, you will be told whether your rig is viable. If it is viable, for the price of $20 you will be taught to sing the haunting melody that subdues OS X and brings your GPU to life. This paid service seems like a reasonable way for customers to support the ongoing work of playing cat and mouse with the vendors. People were grateful.

Very shortly after this service started and began yielding happy customers, it is said that a rival player known as goalque (seemingly well regarded by his side of the internet) inspected the work of netkas and generalized it into a rather burly shell script that now sits in goalque’s github repo. It may be executed by anyone for no fee, much to the continued frustration of the netkas camp. The feud lives on, with the scene’s two popular forums (netkas and techinferno) appearing to come down on opposing sides – all of which is completely inconsequential to the users, who are either stoked to pay $20 to netkas to light up their rig, or stoked to run a shell script from github to light up their rig.

Posted in mac pro

How to rescue aborted QuickTime Player audio recordings

Know that feeling when you remember you don’t have your keys just as the locked door clicks shut? It’s the same feeling as when you use QuickTime Player to record some lengthy audio, and remember you never stopped the recording just as you’re putting the laptop to sleep. I can’t help with the first problem, but after losing a couple of QuickTime recordings this way, I put on my virtual deerstalker and got to work.

The 10 minute version of this report is here: https://www.youtube.com/watch?v=N0Ec7zMyXQ8

… and the 30 second version is here: https://gist.github.com/dreness/e61fb16dcb831adaf6ff#file-fix-aifc-sh

Posted in media, OS X, Pro Tip, scripts, The More You Know, tutorials

Spotlight, UserFilesOnly, kMDItemSupportFileType, and MDSystemFile

Recently, VMware Fusion stopped appearing in Spotlight results. Other queries return expected results, and the Spotlight index info for Fusion appears OK at first glance via:

mdls "/Applications/VMware Fusion.app"

What’s going on, then? To get a different perspective, I tried a Spotlight search in Finder, then saved the results and examined the resulting XML file with Property List Editor. It seems there are additional filters in this search that aren’t accounted for in the UI (which is typical Apple, but I digress), such as FinderFilesOnly and UserFilesOnly. I decided to try running the raw query without those extra filters using mdfind, and sure enough:

$ mdfind '(** = "vmware fusion*"cdw) && (kMDItemContentTypeTree=com.apple.application)'
/Applications/VMware Fusion.app

A closer look at the mdls output for VMware Fusion.app reveals the culprit:

$ mdls -name kMDItemSupportFileType /Applications/VMware\ Fusion.app
kMDItemSupportFileType = (
    MDSystemFile
)

Kill it by overwriting rather than deleting:

$ sudo xattr -w com.apple.metadata:kMDItemSupportFileType "" /Applications/VMware\ Fusion.app
$ mdls -name kMDItemSupportFileType /Applications/VMware\ Fusion.app
kMDItemSupportFileType = (null)

… and now everything’s OK again.
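If Spotlight doesn’t notice the xattr change on its own, you can nudge it by re-importing the bundle (mdimport is the stock tool for this):

```shell
mdimport "/Applications/VMware Fusion.app"
```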

Posted in OS X

Traffic micro-management: limit network bandwidth used by an OS X process

Problem: you have some silly uploader app that only knows one speed: maximum. You would love a way to make that app back off so it doesn’t saturate your uplink and badly impact the other things using your network. Previously in OS X, you would accomplish this using ipfw, but these days you’d use pfctl and dnctl.

None of that is breaking news. The reason for this post is that I thought of an easy way to make the limits apply only to a specific app’s traffic, and all of that traffic. Traditionally you’d have to identify the traffic using some combination of source or destination IP address or port, which can get quite cumbersome. PF also supports matching packets by uid and gid. Let’s use gid, and then run the target app with a custom gid :)

First, make a new unix group called throttle:

sudo dseditgroup -o create throttle

Next, give yourself permission to run things with the group id of the throttle group. To do this, edit /etc/sudoers to add a line such as this, substituting your actual user name.

username ALL=(ALL:throttle) ALL

To test the new group and sudo configuration, run ‘id -g’, and then again with a custom group id (using sudo’s -g option). The results should be different.

$ id -g
$ sudo -g throttle id -g

Now we can instantiate the requisite PF and dummynet setup. Here’s a little script that wants to be run as root.


# Reset dummynet to default config
dnctl -f flush

# Compose an addendum to the default config to create a new anchor
read -d '' -r PF <<EOF
dummynet-anchor "throttle"
anchor "throttle"
EOF

# Reset PF to default config and apply our addendum
(cat /etc/pf.conf && echo "$PF") | pfctl -q -f -

# Configure the new anchor
cat <<EOF | pfctl -q -a throttle -f -
no dummynet quick on lo0 all
dummynet out proto tcp group throttle pipe 1
dummynet out proto udp group throttle pipe 1
EOF

# Create the dummynet pipe - adjust speed as desired
dnctl pipe 1 config bw 1Mbit/s

# Show new configs
printf "\nGlobal pf dummynet anchors:\n"
pfctl -q -s dummynet
printf "\nthrottle anchor config:\n"
pfctl -q -s dummynet -a throttle
printf "\ndummynet config:\n"
dnctl show queue

# Enable PF
pfctl -E

Finally, start the target app as follows:

sudo -g throttle open -a "Whatever"

To watch the counters on the bandwidth limiting queue:

sudo dnctl show queue

To clear all PF config / state and reset PF to defaults:

sudo pfctl -F all ; sudo pfctl -q -f /etc/pf.conf

NB: prior to 10.11.2, use of “pfctl -F all” might kernel panic your machine. This post was made public in December, but I wrote it back in August and filed the kernel panic bug then, which is fixed as of 10.11.2. Unfortunately, the bug goes back at least as far as 12F45…

To wrap up, keep in mind that this is a hack. What we’re really trying to accomplish requires both a privileged position on the network and more expressive, fine-grained traffic controls than dummynet provides. Since neither of those is always available, doing dumb rate limiting on an individual host as documented above can still be useful.

While we’re here, let’s take a moment to illustrate the difference between dumb host-level rate limits (which I am derisively referring to as “traffic micro-management”) and proper traffic shaping at the network edge. Two primary goals of traffic shaping are: 1) avoid congestion at the bottleneck(s), which is typically your internet connection, and 2) maximize network utilization. Congestion happens when packets traversing the bottleneck get piled up (‘queued’) and have to wait a relatively long time to get through. This results in high ping times (known on Twitch as: “high ms”), in-game lag, and generally sluggish performance of any software that directly or indirectly uses the Internet. The way to avoid congestion at the bottleneck is to avoid sending more traffic than can pass through the bottleneck without incurring substantial additional delay. We also want to utilize as much of the bottleneck’s capacity as possible. These are somewhat opposed goals.

As an analogy for the two goals of avoiding congestion and maximizing throughput, imagine pouring stuff in a funnel as fast as possible without allowing any stuff to accumulate in the funnel.

Now imagine performing the above experiment again, but with multiple people pouring stuff in the funnel, each starting and stopping at random times with no coordination between them. Chances are good that the multi-user experiment won’t do as well at avoiding congestion and/or maximizing throughput. For the same reason, the only way to reliably do traffic shaping to avoid network congestion is to do it at the edge of the network (such as a router), where traffic shaping policies can account for and apply to all traffic that traverses the bottleneck.

Posted in OS X, scripts, The More You Know, tutorials

Nuke and pave of pfSense on the SG-2440

I may not be the first to deal with the fallout of filesystem corruption on an SG-2440 running pfSense 2.2 due to sudden power loss, but I might be the first to put the cliff notes of the recovery process in one place.

The first obvious symptom of trouble is the web admin throwing HTTP 500 and 503 errors. Research reveals that this problem is not completely rare, and is often caused by unclean shutdowns resulting in filesystem damage. If you’re feeling unsettled about why a tiny fan-less network appliance such as this would be so grumpy about power loss, be advised that pfSense has another mode where things can lose power safely, because ‘non-volatile’ file systems like / are mounted read-only and volatile ones like /var/log, /tmp, and /var/run are on ramdisk. The SG-2440 does not use this mode by default; it uses the ‘full install’ mode, which behaves much more like a standard FreeBSD system, so you’re supposed to shut it down like a nice person. Read up on the difference between the pfSense “full install” and “nanobsd” configurations.

Since the web admin is dead, to diagnose this further we’ll use the console port.

Accessing pfSense console port from OS X
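The gist of it, assuming the same Silicon Labs USB-to-UART driver linked in the Windows steps below (on OS X it creates a device node named something like /dev/cu.SLAB_USBtoUART):

```shell
# Connect the mini-USB cable, install the driver, then find the device node
ls /dev/cu.*

# Connect at 115200; press enter to get a shell. Quit screen with ctrl-a ctrl-\
sudo screen /dev/cu.SLAB_USBtoUART 115200
```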

Accessing pfSense console port from Windows

  • Connect mini-usb cable between pfsense console port and windows machine.
  • install USB to COM bridge driver found here: http://www.silabs.com/products/mcu/Pages/USBtoUARTBridgeVCPDrivers.aspx
  • open Device Manager -> Ports
  • locate Silicon Labs USB bridge COM listing. Note the number after COM, e.g. COM3
  • boot firewall
  • fire up Putty, make a new serial connection with a speed of 115200 using the COM port discovered previously
  • press enter. You should have a root shell.

Once consoled in, I ran /etc/rc.initial to use the ‘Restart PHP-FPM’ command to try to reboot the web stuff, as I read this worked for some folks. It emitted some nonsense about not knowing what the wheel group means. A cursory glance around town shows that /etc/group, /etc/passwd, and /etc/master.passwd are all munged. Not good.

Reinstall pfSense

  • Download a memstick image from pfsense. Choose the ‘netgate’ option from the Computer Architecture menu, since apparently the SG-2440 is a netgate.
  • Prepare a USB stick with install media.
    • Insert a USB stick (into your workstation, in this case a mac) that you don’t mind erasing.
    • If any filesystems on the usb stick are mounted, unmount them (but do not eject the device) – you can do this with Disk Utility by selecting the volumes and clicking “Unmount”.
    • Find the USB stick device number with: diskutil list
    • Wipe the partition table on the USB stick with dd:
      sudo dd if=/dev/zero of=/dev/disk3 bs=1m count=1

      (assuming the USB stick is /dev/disk3)

    • Copy the image to the device:
      gzcat pfSense-memstick-ADI-2.2.2-RELEASE-amd64.img.gz | \
      sudo dd of=/dev/disk3 bs=16k
  • Eject the USB stick and insert it into one of the pfSense USB ports
  • Boot the pfSense box.
  • Shortly after boot, you are prompted to press F12 if you want a boot menu. Do it.
  • You should now see a list of storage devices; select the USB stick.
  • Let the next menu pass you by; don’t choose anything.
  • After a bit more booting, you will be given the chance to press ‘i’ to run the installer. Do that.
  • From the next menu, accept the console settings.
  • Choose “Custom Install”
  • Select the Generic Ultra HS-Combo Disk as the target for the installation
  • Choose “Format this Disk”
  • Choose “Use this Geometry”
  • Format da1
  • Skip the custom partitioning step
  • Accept and install Bootblocks
  • Select the internal drive
  • Accept and Create
  • Watch the progress window
  • Embedded Kernel
  • Reboot
  • No VLANs
  • Assign the four network interfaces igb0, igb1, igb2, and igb3 to WAN, LAN, OPT1, and OPT2 respectively.
  • Type ‘y’ to finish.

pfSense (pfSense) 2.2.2-RELEASE amd64 Mon Apr 13 20:10:22 CDT 2015
Bootup complete
FreeBSD/amd64 (pfSense.localdomain) (ttyu1)
*** Welcome to pfSense 2.2.2-RELEASE-pfSense (amd64) on pfSense **
 WAN (wan) -> igb0 -> 
 LAN (lan) -> igb1 -> v4:
 OPT1 (opt1) -> igb2 -> 
 OPT2 (opt2) -> igb3 -> 
 0) Logout (SSH only)               9) pfTop
 1) Assign Interfaces              10) Filter Logs
 2) Set interface(s) IP address    11) Restart webConfigurator
 3) Reset webConfigurator password 12) pfSense Developer Shell
 4) Reset to factory defaults      13) Upgrade from console
 5) Reboot system                  14) Enable Secure Shell (sshd)
 6) Halt system                    15) Restore recent configuration
 7) Ping host                      16) Restart PHP-FPM
 8) Shell
Enter an option:
Posted in bit bucket

PKTAP extensions to tcpdump in OS X

The tcpdump man page in OS X contains various references to something called PKTAP, such as in the documentation for the -k option:

 Control the display of packet metadata via an optional
 metadata_arg argument. This is useful when displaying packets
 saved in the pcap-ng file format or with interfaces that support
 the PKTAP data link type.

 By default, when the metadata_arg optional argument is not
 specified, any available packet metadata information is printed
 out.

 The metadata_arg argument controls the display of specific
 packet metadata information using a flag word, where each
 character corresponds to a type of packet metadata as follows:

 I interface name (or interface ID)
 N process name
 P process ID
 S service class
 D direction
 C comment

 This is an Apple modification.

This sounds like fun, but my attempts to use this were foiled by the fact that none of my interfaces support the PKTAP data link type.

If I had searched the man page for other references to PKTAP, I would have learned that tcpdump can create a ‘virtual’ PKTAP interface that wraps a specified list of other interfaces. All those other interfaces are visible through this PKTAP interface, and all the associated metadata is available for viewing / filtering.

For example, to view only packets sent or received by ssh processes, and also view the additional metadata (-k):

andre@flux [~] % sudo tcpdump -tknq -i pktap,en0 -Q "proc =ssh" 
tcpdump: data link type PKTAP
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pktap,en0, link-type PKTAP (Packet Tap), capture size 65535 bytes
(en0, proc ssh:44637, svc BE, in) IP > tcp 180
(en0, proc ssh:44637, svc CTL, out) IP > tcp 0

To simply view all of the PKTAP metadata on all packets, try something like the following (substituting en0 for your active interface(s)):

sudo tcpdump -q -n -i pktap,en0 -k

The PACKET METADATA FILTER section of the man page describes the various filtering controls.

It seems like this PKTAP stuff is used by default when doing packet captures on iOS using the provided tools. Wireshark also supports PKTAP, and had a few words about Apple’s implementation :)

Posted in OS X, Pro Tip, The More You Know

Newpro is boss

2011 was drawing to a close, and I was uneasy at the lack of a Mac Pro refresh. My 2009 MacPro4,1 was still performing admirably, but video workflows were starting to feel sluggish as I incorporated more high-frame-rate content. Tasks like video encoding were almost as fast on laptops shipped earlier in 2011 as they were on my Big Bad MacPro.

time avexporter -dest ~/t -replace -preset AVAssetExportPreset1920x1080 -source ~/Movies/wow.mov

# MacBookPro8,2 Intel Core i7 2820QM @ 2.3 GHz / 32nm “Sandy Bridge” / Early 2011 (thor)
326.51s user 11.64s system 447% cpu 1:15.55 total
325.68s user 11.63s system 448% cpu 1:15.29 total

# MacPro4,1 Intel Quad Core Xeon W5590 @ 3.3 GHz / 45nm “Nehalem” / Early 2009 (rune)
324.57s user 10.70s system 451% cpu 1:14.24 total
323.17s user 10.56s system 451% cpu 1:13.97 total

Even in 2011, I figured the Mac Pro tower form factor was not long for this world. Thunderbolt had already arrived, and was turning out to be a pretty nice interconnect for storage, networking, and other high-bandwidth peripherals. Notably, video cards are not among the things that are typically worthwhile to use over thunderbolt (you’d need an external thunderbolt –> PCI chassis, and you’d have to live with sub-par graphics performance). Also keenly detecting a trend away from discrete GPUs and towards ‘integrated’ GPUs in newer Macs, I wrote the following letter to a high-level apple executive:

We’ve never met, but I feel compelled to send this note. I know Apple does not comment on rumors, and I fully expect no reply.

*please* don’t further marginalize users who want the best possible GPU performance on ANY platform. Our current Mac Pro GPU options are already pretty bad compared to the rest of the desktop market, and all of our mobile devices still lag far behind high-end desktop GPU performance – even though our best desktop card (the ATI 5870) shipped over 3 years ago (!!), and our best mobile GPUs are much more recent.

While I realize it would probably be very easy to make the business case for not caring about the Mac Pro, please consider the possibility of maintaining it as sort of a loss-leader. The biggest, best Mac Pro attracts power users and power developers alike.

I’m no EE, but I do understand the basics of power / size / heat / performance ratios. Given similar technology, the larger card that uses more power will almost certainly be faster. That’s the one I want, and I’m not alone.

Now… if we can meet or exceed *high-end* desktop performance in a portable package, I’m all for it! Today’s portables aren’t that close, but … who knows what the future holds.

When a refreshed MacPro5,1 tower landed the following year in 2012, I was surprised – at least, until I looked at the specs, which were identical to the 2010 MacPro5,1 tower except for RAM and CPU. In other words, it seemed as though relatively little effort was put into this refresh, and it didn’t go very far in reassuring me that the Mac Pro had a future. “Not dead yet”, I thought.

I kept banging away, hope for the Mac Pro slowly fading over time, until October 2013 when the MacPro6,1 was announced. There were unanswered questions, but what we saw was pretty impressive, and would clearly keep the flame alive. I was extremely pleased to see such a head-on approach to the GPU problem, and it made me feel like my letter two years prior came at an interesting time for the people who built this thing. When I heard of the intended availability (Dec ’13), I probably put on a little sad face, because who would ever intentionally ship a product in December, unless there were larger constraints at play? I expected this date to slip, or for availability to be constrained at first. Which is all fine, because after waiting this long, a few more months seemed like nothing – and the worry had completely evaporated :)

They did manage to ship some new Mac Pros in 2013, but indeed availability was constrained for many months. It wasn’t until the second or third day of WWDC 2014 that the new mac pro was finally made available for purchase by employees at a discount (think: customers first). I placed my order within hours, and it shipped the following day!

It’s pretty much a dreamboat, even though single-core workloads are faster on a friggin iMac. The same avexporter test shown earlier clocks in at 1:03 on this mac, after logging something about an ‘error loading GPU renderer’. When all the hardware resources are brought to bear (e.g. a stack of 19 effects rendering in real-time without any dropped frames in FCP X), the result is one you totally can’t achieve on an iMac – although you can get close by slapping some fast GPUs into a MacPro5,1.

There’s a fair amount of new architecture in this thing. I feel like MacPro6,1 is waaaay different from anything else Apple ships, and is decidedly ‘off the beaten path’. I have found a couple software oddities here and there that seem unique to this model, but nothing serious. In general, performance and reliability have been very good, and there’s nothing about the hardware that makes me uneasy. It even has a power light! Wow!

Shortly after I got foci, I also picked up a thunderbolt cable, even though the laptop was my first and only thunderbolt device. That cable hung in my closet until a couple hours ago, when I used it to benchmark a thunderbolt <-> thunderbolt network on the new pro.


This leads me to conclude that the original thunderbolt cable is also a thunderbolt2 cable. Fancy!

Custodial note: doing the above thunderbolt test requires connecting the cable across two different thunderbolt controllers, so e.g. from port 1 to 2, or 3 to 4, but not 5 to 6. Use the following diagram:


You’ll then need to create two thunderbolt bridges in the Network prefpane, and map each of the ports you’re using to a different bridge. Click the gear menu at the bottom of the interface list, then select “Manage Virtual Interfaces” to reveal the bridge editor.
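
Once both bridges are up (each end should grab a self-assigned 169.254.x.x address unless you assign one manually), a tool like iperf3 makes for a quick throughput check. iperf3 isn’t bundled with OS X, and the server address below is a made-up placeholder, so treat this as a sketch:

```shell
# Sanity-check the bridge interfaces first (names and members vary by machine):
#   networksetup -listallhardwareports
#   ifconfig bridge0
# Then measure throughput across the link. SERVER_IP is an assumed
# self-assigned address -- substitute the far machine's bridge address.
SERVER_IP="169.254.10.20"
CLI_CMD="iperf3 -c $SERVER_IP -t 30"
echo "far mac:  iperf3 -s"
echo "near mac: $CLI_CMD"
```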



Let’s see, what else… ramdisk I/O seems to top out at about 4 GB/s.
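
For anyone who wants to reproduce that, a ramdisk can be conjured with hdiutil and diskutil. The ram:// device URL takes a size in 512-byte sectors; the 2 GB size and RamDisk volume name here are arbitrary choices, not anything this test depended on:

```shell
# ram:// takes a size in 512-byte sectors, so convert from bytes first.
SIZE_GB=2
SECTORS=$(( SIZE_GB * 1024 * 1024 * 1024 / 512 ))
echo "$SECTORS"   # 4194304
# macOS-only steps, left as comments so the arithmetic above stays portable:
#   diskutil erasevolume HFS+ RamDisk $(hdiutil attach -nomount ram://$SECTORS)
#   ...run your i/o benchmark against /Volumes/RamDisk...
#   hdiutil detach /Volumes/RamDisk
```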

The LuxMark OpenCL benchmark tells a good story. MacPro6,1 is shown first, followed by MacPro4,1.

I also picked up a Promise Pegasus2 R6, which benches faster than the iStorage Pro setup it is replacing, even though it’s got 2 fewer spindles.


Posted in mac pro | Leave a comment