100% agree with every post here so far. (Good advice and questions, /u/3diddy, /u/TheIncorrigible1, /u/arichtman, /u/Giordy77, /u/xradionut, /u/Windowsadmin, /u/uelmo, /u/dstrait!)
To summarize: it depends.
If you want to expand your mind, learn a totally different language:
If you want to directly amplify your PowerShell capabilities, learn C# or VB.Net.
SQL: I'll just leave this here.
It really just depends on what you're interested in doing next.
The closest I've come to building something in PowerShell and then rebuilding it in another language has been situations where I started fiddling around with some API, data, or technique (because PowerShell makes it easy and fun) to the point where it became clear that PowerShell was not the right tool.
Examples:
Otherwise, like SeeminglyScience, I have a good sense at the outset that the final application will be developed in some other environment, but I still use PowerShell to explore the problem space, sometimes even writing utilities that aren't part of the application but are still useful along the way.
You should really think about putting this in the "What is it?" section of the site itself; it was the first thing I was curious about. Honestly, this looks exactly like the kind of tool I have been waiting for for a very long time. Manually describing parsing of binary formats in code is a real pain, and is indeed incredibly bug-prone.
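Just to illustrate the pain: here's a hypothetical hand-rolled C++ parser for a made-up 8-byte header (the format itself is invented for the example). Every shift, offset, and bounds check is a place for a bug to hide, and the actual shape of the data is buried in the plumbing:

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hypothetical 8-byte header: magic (u32le), version (u16le), flag count (u16le).
struct Header {
    uint32_t magic;
    uint16_t version;
    uint16_t num_flags;
};

Header parse_header(const std::vector<uint8_t>& buf) {
    if (buf.size() < 8)  // forget this check and you get an out-of-bounds read
        throw std::runtime_error("truncated header");
    Header h;
    h.magic = static_cast<uint32_t>(buf[0]) |
              static_cast<uint32_t>(buf[1]) << 8 |
              static_cast<uint32_t>(buf[2]) << 16 |
              static_cast<uint32_t>(buf[3]) << 24;
    h.version   = static_cast<uint16_t>(buf[4] | buf[5] << 8);  // endianness bugs hide here
    h.num_flags = static_cast<uint16_t>(buf[6] | buf[7] << 8);  // off-by-one offsets hide here
    return h;
}
```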
I'm not talking about the procedural/functional dichotomy here; I'm referring to the imperative/declarative one. Parser combinators, for instance, could be classed as imperative, but lie in the functional classification.
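To make that concrete, here's a minimal C++ sketch (my own, not any particular library): every combinator is a pure function, yet a grammar built from them reads as imperative "do this, then that" steps:

```cpp
#include <functional>
#include <optional>
#include <string_view>
#include <utility>

// A parser is a pure function: input in, (value, rest-of-input) out on success.
template <typename T>
using Parser =
    std::function<std::optional<std::pair<T, std::string_view>>(std::string_view)>;

// Primitive: match one specific character.
Parser<char> ch(char c) {
    return [c](std::string_view in) -> std::optional<std::pair<char, std::string_view>> {
        if (!in.empty() && in.front() == c)
            return std::make_pair(c, in.substr(1));
        return std::nullopt;
    };
}

// Combinator: run `a`, then `b` on what's left. Purely functional plumbing,
// but the resulting grammar is an ordered sequence of steps.
template <typename A, typename B>
Parser<std::pair<A, B>> seq(Parser<A> a, Parser<B> b) {
    return [a, b](std::string_view in)
               -> std::optional<std::pair<std::pair<A, B>, std::string_view>> {
        if (auto ra = a(in))
            if (auto rb = b(ra->second))
                return std::make_pair(std::make_pair(ra->first, rb->first), rb->second);
        return std::nullopt;
    };
}

// Usage: auto ab = seq(ch('a'), ch('b'));  ab("abc") succeeds, leaving "c".
```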
Mostly just that it defaults to requiring a lot of boilerplate for simple things, which obscures the actual shape of the binary data. More boilerplate means more chances of introducing a bug, even if you are being memory safe. It also makes it harder to do different kinds of static analysis and to generate different styles of API (streaming, tree-walking, etc.). Lots of stuff can be derived from a declarative specification. Granted, the author addresses that when they compare it to Kaitai Struct (another influence of mine), saying:
> Kaitai Struct is in a similar space, generating safe parsers for multiple target programming languages from one declarative specification. Again, Puffs differs in that it is a complete (and performant) end to end implementation, not just for the structured parts of a file format. Repeating a point in the previous paragraph, the difficulty in decoding the GIF format isn't in the regularly-expressible part of the format, it's in the LZW compression. Kaitai's GIF parser returns the compressed LZW data as an opaque blob.
Personally, I'd prefer to allow folks to drop down to a lower level (like parser combinators) at that stage, but I don't know if it's worth giving up declarativeness for the entire specification for that. It remains to be seen whether I'll be successful though. ;)
Not a huge fan at first blush. I dislike transpilers generally (I had to deal with some MATLAB-to-C code at one point, and it was a nightmare).
If you are looking at parsing specifically, there is a topic called parser combinators that may be worth looking into. I have also heard good things about Kaitai, but I have never used it personally.
Also, while we're here, I figured I'd mention it: if you have C++14 or newer, I'd highly recommend boost-ext/sml for writing anything state-machine based.
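For a taste, a minimal machine in sml looks roughly like this (adapted from memory of the project's hello-world; check the repo's README for the exact current API):

```cpp
#include <boost/sml.hpp>
#include <cassert>

namespace sml = boost::sml;

// Events
struct play {};
struct stop {};

// A tiny two-state machine: idle <-> playing.
struct player {
    auto operator()() const {
        using namespace sml;
        return make_transition_table(
            *"idle"_s   + event<play> = "playing"_s,
            "playing"_s + event<stop> = "idle"_s
        );
    }
};

int main() {
    using namespace sml;
    sm<player> machine;
    machine.process_event(play{});
    assert(machine.is("playing"_s));
    machine.process_event(stop{});
    assert(machine.is("idle"_s));
}
```

The transition table is declarative, and the whole thing is a single header with no runtime overhead to speak of.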
> The big job is instead in defining the shape and semantics of the fields, i.e. where they are located, what they are called, and how they must be interpreted (signed int, unsigned int, lookup in a table... the values of that table, their respective labels).

> Then there's an extra 50% of additional work needed to test the description. It absolutely has to be done, because the description is an extremely dense document, highly prone to human error.

> Nobody does this work for you, and if the embedded platform you work on is rare, it's all work that won't be reused.

> If you expect a lot of reuse, you define yourself a nice DSL (domain-specific language), perhaps embedded in C++, that lets you define the register fields and their semantics and that does both the packing and the unpacking. That way you shape the field-definition language in whatever manner suits you best.

> But the big job of entering the definition always has to be done by hand.
Do you think tools like kaitai.io or similar could help in this respect?
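For what it's worth, the kind of C++-embedded DSL the quoted comment describes might look roughly like this minimal sketch (register names and layout are made up): each field is defined exactly once, and both packing and unpacking are derived from that single definition:

```cpp
#include <cstdint>

// Hypothetical field-definition DSL: a field records its bit offset and
// width once; pack/unpack fall out of that single source of truth.
template <unsigned Offset, unsigned Width>
struct Field {
    static constexpr uint32_t mask = ((1u << Width) - 1u) << Offset;
    static constexpr uint32_t pack(uint32_t value) {
        return (value << Offset) & mask;
    }
    static constexpr uint32_t unpack(uint32_t reg) {
        return (reg & mask) >> Offset;
    }
};

// Example register layout, described in one place.
struct CtrlReg {
    using Enable    = Field<0, 1>;   // bit 0
    using Mode      = Field<1, 3>;   // bits 1..3
    using Prescaler = Field<4, 8>;   // bits 4..11
};

int main() {
    uint32_t reg  = CtrlReg::Enable::pack(1) | CtrlReg::Mode::pack(5);
    uint32_t mode = CtrlReg::Mode::unpack(reg);  // == 5
    return mode == 5 ? 0 : 1;
}
```

It doesn't remove the big manual job of entering the definitions, but it does mean each definition is written (and tested) only once.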
If possible, you should find out what engine the VN uses and how that engine packs its files.
You can also use software like Kaitai Struct to make writing a parser easier. I haven't used it myself, but I have heard it is quite good for this task.
I'm not sure if it helps, but here is a guide in Russian (hopefully Google-translatable) on how to use it to extract data from VN package files of unknown format.
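As a rough idea of the workflow with Kaitai's C++ target (a sketch; the class and accessor names below are hypothetical and depend entirely on the .ksy spec you write): you describe the format once in a declarative .ksy file, run the Kaitai Struct compiler, and get a parser class you drive like this:

```cpp
#include <fstream>

#include <kaitai/kaitaistruct.h>
#include "vn_package.h"  // hypothetical class generated from a vn_package.ksy spec

int main() {
    std::ifstream is("data.pak", std::ifstream::binary);
    kaitai::kstream ks(&is);     // Kaitai runtime stream wrapping the file
    vn_package_t pkg(&ks);       // parsing happens here, driven by the spec
    // Fields declared in the spec become plain accessors,
    // e.g. pkg.num_files() or pkg.entries() for this made-up format.
}
```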
We thought about a more generic approach (i.e. with de-optimizations), but eventually we settled on our "brute-force" way; it proved to be much more effective. As I said, the real problem was not the matching of individual functions per se, but combining hypotheses in a way that looks most natural, the way a real human would probably have implemented them in a real-life project.
Unfortunately, the main product of the Kaitai project (i.e. a disassembler/decompiler) is kind of stalled now. I'd love to release it as open source, but the copyright situation is kind of messy, and the rest of the team still believes there is certain know-how there that should be kept secret and exploited commercially.
http://kaitai.io is indeed the site of the Kaitai project, though right now it's only used as a homepage for my smaller side project called "Kaitai Struct". You're most welcome to take a look, though :) It's heavily used in the main Kaitai disassembler/decompiler as a tool for flexible description of data structures.