So essentially a chat service? You're right that a LAN is simply a network of devices, not necessarily connected to the Internet, but from a machine perspective, communicating over LAN vs over the internet isn't all that different. It's what you want to do with the devices in your LAN that makes things more challenging.
At a super high level, you'd have your "server" code on your MacBook, which would manage the devices attached to it, and your clients, which would reach out to the server. When you type a message, it gets sent to the MacBook (server), which then relays the message to all the other devices. This can most definitely be done in Python (and probably Swift, but I've never used it). For Python, look into the socket library, but I'd do more research into the network stack in general before diving into any code, especially if the end goal of this is to get a strong grasp of network programming.
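Just to make the relay idea concrete, here's a minimal sketch of the server side using the socket library (host/port values and names are made up for illustration, and real code would want framing and error handling):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5000  # assumed values for illustration

clients = []                     # sockets of currently connected clients
clients_lock = threading.Lock()  # the handler threads share the list

def handle_client(conn):
    """Receive messages from one client and relay them to every other client."""
    try:
        while True:
            data = conn.recv(1024)
            if not data:         # empty read means the client disconnected
                break
            with clients_lock:
                for other in clients:
                    if other is not conn:
                        other.sendall(data)
    finally:
        with clients_lock:
            clients.remove(conn)
        conn.close()

def serve():
    """Accept connections forever, one relay thread per client."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            with clients_lock:
                clients.append(conn)
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
```

A client is then just `socket.create_connection((HOST, PORT))` plus a loop that `sendall`s what you type and `recv`s what the server relays.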
YMMV, but when I was taking a networks class, the textbook we used was https://www.amazon.com/dp/0132856204/, which was surprisingly helpful, and explained things very well. I'm sure there are plenty of free resources out there as well though (and most likely free pdf versions of that book somewhere). At a glance, it looks like this link gives a very in-depth view of the network stack, and how data is transmitted.
Are you actually measuring and observing a performance hit? If not, I wouldn't worry about it. In HotSpot at least, object allocation is very, very fast (literally just incrementing a pointer), and the GC is optimized for short-lived objects (this is called the weak generational hypothesis). Obviously whether or not this applies to Dalvik/ART is a different story, but I'm guessing it does. Anyway, as the saying goes, premature optimization is the root of all evil. Don't worry too much about it until you actually measure a significant performance hit.
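The "measure first" part is cheap to do. Just to illustrate the principle in Python (since I don't know your actual code, this comparison is made up), time the allocation-heavy path against the reuse version before assuming allocation is your bottleneck:

```python
import timeit

def with_allocation():
    # Allocates a fresh short-lived list on every iteration.
    for _ in range(1000):
        point = [3, 4]
        _ = point[0] + point[1]

def with_reuse():
    # One object reused across all iterations.
    point = [3, 4]
    for _ in range(1000):
        _ = point[0] + point[1]

alloc_time = timeit.timeit(with_allocation, number=100)
reuse_time = timeit.timeit(with_reuse, number=100)
print(f"alloc: {alloc_time:.4f}s  reuse: {reuse_time:.4f}s")
```

If the numbers are both tiny relative to your frame budget, the "optimization" isn't buying you anything.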
Well, it varies by database, but I believe the most common scheme is a file of slotted pages. They look like this:
http://www.cubrid.org/files/attach/images/220547/497/656/postgresql_data_page_structure.png
The idea is that the variable-length values stack in from one end of the page, and they're indexed by an array of fixed-width offsets that grows in from the other end. The page header often has other info, like a bitmap of which nullable values are present or absent, so that null values in the tuples can be omitted.
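A toy version of that layout is easy to sketch in Python (header/slot field sizes here are made up, real engines pack more into the header):

```python
import struct

PAGE_SIZE = 4096

class SlottedPage:
    """Toy slotted page: records stack in from the end of the page, while
    the (offset, length) slot array grows in from just after the header."""
    HEADER = struct.Struct("<HH")  # num_slots, free_space_end (illustrative)
    SLOT = struct.Struct("<HH")    # offset, length of one record

    def __init__(self):
        self.buf = bytearray(PAGE_SIZE)
        self.num_slots = 0
        self.free_end = PAGE_SIZE  # records are written below this point

    def insert(self, record: bytes) -> int:
        slot_area_end = self.HEADER.size + (self.num_slots + 1) * self.SLOT.size
        if slot_area_end + len(record) > self.free_end:
            raise ValueError("page full")
        # Record data grows down from the end of the page...
        self.free_end -= len(record)
        self.buf[self.free_end:self.free_end + len(record)] = record
        # ...while the fixed-width slot array grows up from the header.
        slot_pos = self.HEADER.size + self.num_slots * self.SLOT.size
        self.SLOT.pack_into(self.buf, slot_pos, self.free_end, len(record))
        self.num_slots += 1
        return self.num_slots - 1  # slot id

    def get(self, slot_id: int) -> bytes:
        off, length = self.SLOT.unpack_from(
            self.buf, self.HEADER.size + slot_id * self.SLOT.size)
        return bytes(self.buf[off:off + length])
```

The indirection through the slot array is the point: records can move within the page (e.g. during compaction) without changing their slot ids.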
Other databases make other choices. Another common scheme is to store each column separately. This means that to reconstruct a row you have to read as many pages as there are columns, so it hurts performance for fine-grained update workloads, like, say, the backend of reddit. But for queries that scan all rows but only care about a subset of columns, it can be much more efficient. So you commonly see this structure in databases targeted at analytics (examples: Vertica, Greenplum, etc).
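The row-vs-column trade-off in miniature (the data here is obviously made up):

```python
# Row store: each row's values are stored together.
rows = [("alice", 30, "NYC"), ("bob", 25, "LA"), ("carol", 35, "SF")]

# Column store: each column's values are stored together.
columns = {
    "name": ["alice", "bob", "carol"],
    "age":  [30, 25, 35],
    "city": ["NYC", "LA", "SF"],
}

# An analytics-style scan touches only the one column it needs:
avg_age = sum(columns["age"]) / len(columns["age"])   # 30.0

# Reconstructing a single full row touches every column:
row1 = tuple(columns[c][1] for c in ("name", "age", "city"))
```

That last line is exactly why point lookups and fine-grained updates hurt in a column store: one logical row is scattered across as many storage locations as there are columns.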
Also, Microsoft's research wing has published quite a bit about one of their newest storage engines, the Bw-tree: http://research.microsoft.com/apps/pubs/default.aspx?id=178758
The papers are well written, and fairly approachable even if this is a new area for you. But this is also a state-of-the-art design that uses concepts like lock-free algorithms, which may feel a touch alien if you're unfamiliar. Afaik nothing else uses a scheme quite like this, but it is shipping in MS products and cloud services (notably the Hekaton main-memory engine in MS SQL Server, and apparently parts of Azure DocumentDB).
Java doesn't do live reference counting. What it does is periodically follow all the references from static variables and local variables in each thread's stack to mark all objects that are reachable. Then it sweeps the heap and removes any objects not marked as reachable. The only reference counts that matter are zero and not-zero, and they aren't updated live, just during the garbage collection mark phase.
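The mark/sweep idea fits in a few lines of Python (a toy model, of course; real collectors work on raw heap memory and are vastly more sophisticated):

```python
class Obj:
    """Toy heap object with outgoing references."""
    def __init__(self, name):
        self.name = name
        self.refs = []       # objects this one points to
        self.marked = False

def mark(roots):
    """Mark phase: follow references out from the roots
    (static variables, each thread's stack)."""
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if not obj.marked:
            obj.marked = True
            stack.extend(obj.refs)

def sweep(heap):
    """Sweep phase: anything unmarked is unreachable and gets dropped."""
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False     # reset marks for the next collection cycle
    return live

a, b, c, d = Obj("a"), Obj("b"), Obj("c"), Obj("d")
a.refs.append(b)             # a -> b, both reachable from the root
c.refs.append(d)             # c -> d, but nothing roots c
heap = [a, b, c, d]
mark([a])                    # 'a' is reachable from a root
heap = sweep(heap)
print([o.name for o in heap])   # survivors: a and b
```

Note that `c` and `d` reference each other's subgraph happily, but since no root reaches them, they're collected anyway; that's the advantage over naive reference counting, which can't reclaim cycles.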
Here's a decent explanation of how the mark phase works. And here's an explanation of how the different generations work, and also an overview of the different garbage collection algorithms you can choose from.
Technically you are correct! I just wanted to correct that one sentence into a "more correct" sentence without writing a whole paragraph or linking to a wall of text. Here is the image I got in class when learning about embedded systems (and Android). The image still mentions the Dalvik runtime (VM, technically); not much changed with ART, so more or less replace Dalvik with ART, or add ART next to it. Also: the guy was surprised I knew what fastboot and busybox were :p
Bonus: yes there is a typo in that image, idk...
So first of all, when you put code into reddit you can format it as code by indenting every line 4 spaces. Some of your code does this, but it looks like a bunch of it is falling outside of those blocks.
This article talks about both AJAX and socket.io and gives simple examples of the server and client talking to each other with each: http://www.cubrid.org/blog/cubrid-appstools/nodejs-speed-dilemma-ajax-or-socket-io/
I think you are comparing apples and oranges here. There are existing Java frameworks, e.g. netty or vert.x derivatives, that (depending on the benchmark, of course) will blow Node out of the water. http://www.cubrid.org/blog/dev-platform/inside-vertx-comparison-with-nodejs/
You skip the servlet cruft and go back to square one (like Node) on the networking I/O layer, and Java still outperforms JavaScript...
I should say that you're missing out on CUBRID, a much better alternative to MySQL than Firebird and BerkeleyDB. The great thing about CUBRID is that it provides over 90% SQL compatibility with MySQL. All your existing PHP apps can work seamlessly using the PHP API for CUBRID; all of its PHP functions are compatible with those of MySQL. Very convenient for developers when switching to CUBRID. Check it out, you may want to include it in your list.
I like CUBRID's native Migration Toolkit for converting to/from MySQL, Oracle, and CUBRID. It's very fast (100GB in 10 hours) and provides both online and offline migration. A very convenient thing in CMT is the automatic data type mapping between the source and target DB.