Honestly, if they are already using DrRacket, The Structure and Interpretation of Computer Programs (SICP) has aged very well and, for an undergraduate class, is probably at least as good as anything else. You will want to strategically choose which parts you cover. It is still being used at MIT, for example.
(Edit: SICP, not SCIP.)
TypeScript does this: the return type is inferred as a union type based on the return statements.
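A small sketch of that inference (parseToken is an invented example; leaving the return type unannotated lets the compiler infer string | number from the two return statements):

```typescript
// TypeScript infers the return type `string | number` from the two
// return statements below (hover over parseToken in an editor to see it).
function parseToken(raw: string) {
  const n = Number(raw);
  if (!Number.isNaN(n)) {
    return n;          // number branch
  }
  return raw.trim();   // string branch
}

parseToken("42");      // → 42 (number)
parseToken(" abc ");   // → "abc" (string)
```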
The halting problem is not an issue here. All you need is a type system that refuses to accept the program unless there is also a proof that the program is "correct" for some notion of correctness (e.g. that it terminates). It's not the job of the type system to automatically invent said proof (which would reduce to the halting problem). That's your job. All it has to do is verify the proof that you supply, which is relatively straightforward.
Coq is probably the language most commonly used for "provably correct" programs (although that's not a high bar to clear). Also, interestingly enough, "provably correct" programs such as CompCert still have bugfix releases. Probably because bugs can still creep into the specification, or when the output of the program is targeting machines or systems with less rigorous behaviour.
Racket is a lisp whose main goal is to simplify language creation.
You may also be interested in the Truffle/GraalVM project. Truffle is a library for creating interpreters in Java. Graal is an experimental compiler for the JVM which can take Truffle interpreters and compile programs in the interpreted language both just-in-time on the JVM and ahead-of-time into a binary executable.
This is a cool thing you've done! Writing a programming language is great fun, and yours looks really interesting.
That said, as another commenter said, this is really not HTML in any way at all. Can you put .html as the extension and display it in a web browser as a webpage? If not, it's not HTML. Maybe change your readme to say that "the syntax is HTML-inspired / XML-based" or something more accurate like that.
Also, really cool that you've made a language server! Does it implement the Language Server Protocol and integrate with the VSCode plugin? If not, you should consider it.
For all the comments criticizing forgiving parsers:
https://code.visualstudio.com/api/language-extensions/language-server-extension-guide#error-tolerant-parser-for-language-server
Error-tolerant parsers are now an industry standard used by IDEs, and the LSP strongly recommends that language implementors provide such parsers for code completion, etc.
Actually I already have a backend for my language that targets C (I started with that one) and one that targets LLVM. Both work, but I'm kind of regretting creating the LLVM one.
The C code I'm generating is pretty clean and pleasant to read, certainly much more so than the LLVM IR.
My LLVM code generator, as a comparison, is a more complicated codebase, I've had bugs in the code generation more often, the generated IR code is much harder to read and it's platform-dependent: it needs to be re-generated differently for different target systems.
Overall I much prefer the C backend. I would like to introduce some optimizations that can't be expressed in C though. But giving up all the nice perks of the C backend only for this seems like a bad choice...
To expand the list, I believe Ceylon and Typed Racket have it.
Nice talk! I just watched this and posted a summary on lobst.ers, copied here:
Back story:
D is getting pretty close.
Great things about it...
https://dlang.org/spec/type.html
Byte, short, int, and long are defined as 8, 16, 32, and 64 bits. Alas, they have the same horrible overflow semantics as C...
...but that's fixable by a library: https://dlang.org/phobos/std_experimental_checkedint.html
That library is an amazing example of the power of Design by Introspection, allowing the user to select which overflow handling strategy to employ.
Don't forget to check out existing implementations like
https://dlang.org/blog/2017/08/23/d-as-a-better-c/
https://dlang.org/spec/betterc.html
I'm sure you can easily find a subset of Nim that can act as a better C. You can start by disabling the garbage collector.
I came across this paper (from 2003) when doing more research on concurrency approaches. In particular, I was looking into what approach to take for Inko's networking API: async or blocking IO. Since async IO requires quite a bit of plumbing (it needs a proper event loop, with proper polling support, etc, etc) I decided to first do some more research on the matter.
To summarise the paper (quoting its conclusion):
> Although event systems have been used to obtain good performance in high concurrency systems, we have shown that similar or even higher performance can be achieved with threads. Moreover, the simpler programming model and wealth of compiler analyses that threaded systems afford gives threads an important advantage over events when writing highly concurrent servers. In the future, we advocate tight integration between the compiler and the thread system, which will result in a programming model that offers a clean and simple interface to the programmer while achieving superior performance.
Combined with the presentation Thousands of Threads and Blocking I/O I'm now very strongly leaning towards ditching M:N, and just going with 1:1 plus blocking IO for Inko.
> But that's not really accomplished by any OO language that I know of.
I'd recommend starting by learning more about existing programming languages before coming up with a new one. For example, Io does something similar (http://iolanguage.org/guide/guide.html).
You are not the first one to come up with a similar idea, to convince me you'd need to show that a) the idea has some practical benefits b) the language is the best way to implement it (you could do it as a library) and c) your implementation is better than others.
> objects are values
So if I have an object holding references to other objects and I send it, what will happen? How does that work with concurrency and parallelism?
> control can be yielded from one process to another
Explicitly (as coroutines do) or by the runtime?
> But that latter function makes me squirm, thinking about the performance and memory impact of re-creating a complex game state data container structures 60x a second
It doesn't have to be recreated from scratch, your old state and the new one can share most data. Structural sharing is the keyword.
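A minimal sketch of structural sharing in TypeScript (GameState and its fields are invented for illustration): only the changed path is rebuilt; everything else is shared by reference:

```typescript
// Invented example: one small field changes per frame; the bulky
// world data is shared between old and new state, not copied.
interface GameState {
  player: { x: number; y: number };
  world: { tiles: number[] };
}

function movePlayer(s: GameState, dx: number): GameState {
  // Only the player object is rebuilt; `world` is reused by reference.
  return { ...s, player: { ...s.player, x: s.player.x + dx } };
}

const s0: GameState = { player: { x: 0, y: 0 }, world: { tiles: [1, 2, 3] } };
const s1 = movePlayer(s0, 5);
// s1.world === s0.world — the heavy part was not recreated,
// and s0 is untouched, so "60x a second" only rebuilds the changed spine.
```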
> The other area of conflict is whether data and function should be packaged together vs. apart
Functions can also be grouped together with data in a purely functional language. The difference from OO is that in a functional language you can't have a this pointer, as that wouldn't make any sense.
> I am struggling to imagine how an FP design would gracefully handle such a ad hoc, loose confederation of dynamic parts
Via isolated processes that communicate via messages. You might want to look at erlang and elixir.
> Put another way, it's like if in LISP, instead of only being able to use one dialect at a time, it was possible to use several at once.
How does this compare to Racket [1] ?
Racket has a macro system which is used to implement other languages which are then able to operate together, although I think it is still one language per module.
A quick search gave me this paper: https://www.lua.org/doc/jucs05.pdf
Is this the one you are recommending? Even if it is not, it looks really cool, so I will be reading it today :)
In Kotlin, asynchronous function calls are sequential by default. If you want the execution to be concurrent, you have to explicitly state it.
The reason behind that design is that in the vast majority of asynchronous functions, the flow is sequential, i.e. you await one asynchronous function before calling another. Therefore they made it the default behavior.
Here is a piece of documentation regarding this issue: https://kotlinlang.org/docs/composing-suspending-functions.html
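TypeScript's async/await has the same sequential-by-default shape, so a sketch there may help; making the calls concurrent takes an explicit Promise.all (the rough analogue of Kotlin's async { } / await()). fetchA/fetchB are invented stand-ins:

```typescript
// Two stand-in async operations.
async function fetchA(): Promise<number> { return 1; }
async function fetchB(): Promise<number> { return 2; }

// Sequential by default: fetchB doesn't start until fetchA resolves.
async function sequential(): Promise<number> {
  const a = await fetchA();
  const b = await fetchB();
  return a + b;
}

// Concurrency must be requested explicitly: start both, then await.
async function concurrent(): Promise<number> {
  const [a, b] = await Promise.all([fetchA(), fetchB()]);
  return a + b;
}
```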
What's more telling about the profession is that the status quo in cloud services that need to be configured is YAML. And often YAML generated with Go templates or worse.
People rightly complain about 70's text sludge: shell, make, awk, m4, etc.
But IMO 2020's text sludge is even worse: YAML (complete with whitespace pitfalls just like make), with embedded shell, on top of weak and ugly specified macro languages expressed in YAML, like Github actions.
example: https://lobste.rs/s/v4crap/crustaceans_2021_will_be_year_technology#c_t7tj0u
I can see why you would say that a configuration language shouldn't be so desired. But it IS desired, because the status quo is exciting in a bad way. A boring configuration language would be a very good thing, but we don't have one.
Factor and its earlier cousin Joy are both homoiconic and allow functions to be manipulated as data structures at runtime, and I believe at compile time as well in Factor. Function values are called “quotations” in concatenative programming, and in these languages, they can be represented as an array containing a mix of “words” (named functions) and nested quotations. Concatenative languages have very simple structure, a bit like Lisp without all the parentheses. Of course you can also build more complex imperative-style notation on top of that basis; I’ve done some of that in Kitten to make it look more familiar to more programmers.
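A toy sketch of quotations-as-arrays (all names invented, not Factor's actual vocabulary): a quotation is just an array the interpreter can traverse as data, push onto the stack, or execute:

```typescript
// A quotation is an array of items: literals, word names, or nested quotations.
type Item = number | string | Quotation;
type Quotation = Item[];

// A tiny dictionary of words operating on a shared stack.
const words: Record<string, (stack: Item[]) => void> = {
  "+": s => { const b = s.pop() as number, a = s.pop() as number; s.push(a + b); },
  "dup": s => { s.push(s[s.length - 1]); },
  "call": s => { run(s.pop() as Quotation, s); }, // execute a quotation from the stack
};

function run(q: Quotation, stack: Item[] = []): Item[] {
  for (const item of q) {
    if (typeof item === "string" && item in words) words[item](stack);
    else stack.push(item); // literals and quotations push themselves
  }
  return stack;
}

// [2, "dup", "+"] is both a data structure and a program: 2 dup + → stack [4]
run([2, "dup", "+"]);
// A nested quotation is pushed as data, then invoked: 3 [4 +] call → stack [7]
run([3, [4, "+"], "call"]);
```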
In Io, calling a method actually passes the message object as the argument, which is then evaluated as necessary by the receiver. In that way, you get messages as first class citizens and a way to apply special evaluation rules to method arguments without macros, for example to create for loops as a function call rather than as a special case in syntax.
> You are essentially describing the "error monad" -- an Option[T] value where the "null" case includes some context (in the simplest form, just a string).
Why d'ya have to go mention a monad? :)
.oO ( And why not refer to Either instead of Option? )
/u/bjarkeebert might like the appropriate wikipedia page though it's at the abstract level, disregarding the particular application of this type for dealing with happy/sad processing.
And/or railway oriented programming.
> How would you choose a branch in an if/else where the condition is an error?
In P6 it's the else.
> How would you choose a branch in a switch/match/case where the condition is an error?
In P6 it would be a case that's sufficiently consistent with an error. (See my other novella comment for details if you care/dare.)
> Most languages already have a feature just like this: NaN! Try coding until you have NaNs popping up and then ponder how you feel about a language feature that silently consumes all work into a single late, silent failure if anything goes wrong at any point.
Right. OP took things to an extreme. But sometimes that's exactly the right thing to do to gain more insight into what's valuable in an initially half baked idea...
Hi there... Thanks for the quick reply!
Your list looks a lot like what can be found in this book... So I'm guessing it would be a good idea to publish them in the order they are introduced there?
But no, I'm not going to make graph visuals... As a rookie, I much prefer the Bubble Notation approach... Much easier to understand...
But is this even the right subreddit for this? Maybe I should be posting in a less advanced section, like r/learnprogramming or something?
Neat!
It seems similar in spirit to Maude and Io.
Side note: The head and tail functions are counterintuitive to me. From their names, I'd expect the behavior shown below, but what they really seem to do is access the operator and operands respectively.
    > head(list(1, 2))
    1
    > tail(list(1, 2))
    list(2)
Ok, let us look at how Facebook is doing it lately: they (not so) recently released Flow and Hack, two static typecheckers for dynamically typed languages.
Hence why gradual typing, in all its forms, seems like a subject worthy of your interest. :)
Racket is probably a very good example (and their contract system might also interest you).
Check out Elixir. The = operator is for pattern matching, not assignment. A sample IEx session showing off some pattern matching abilities, with the first interesting case at iex(4)>:
    iex(1)> x = 1
    1
    iex(2)> y = 2
    2
    iex(3)> z = [x,y]
    [1, 2]
    iex(4)> [_,b] = z
    [1, 2]
    iex(5)> b
    2
    iex(6)> m = %{p: z, q: 5}
    %{p: [1, 2], q: 5}
    iex(7)> %{p: v} = m
    %{p: [1, 2], q: 5}
    iex(8)> v
    [1, 2]
    iex(9)> %{q: w} = m
    %{p: [1, 2], q: 5}
    iex(10)> w
    5
    iex(11)> s = "Hello r/ProgrammingLanguages"
    "Hello r/ProgrammingLanguages"
    iex(12)> <<c, cs :: binary>> = s
    "Hello r/ProgrammingLanguages"
    iex(13)> c
    72
    iex(14)> cs
    "ello r/ProgrammingLanguages"
    iex(15)> <<"ello", cs :: binary>> = cs
    "ello r/ProgrammingLanguages"
    iex(16)> cs
    " r/ProgrammingLanguages"
This combined with Elixir's case makes it easy to ingest data in appropriately sized pieces.
The link redirects to an article titled "Best Practices for Using Functional Programming in Python", which is not at all what I was expecting based on the post title.
Additionally, FP in Python is bad: no tail call optimization is guaranteed by the language specification, and lambda is broken in some sense (by which I mean it behaves in a non-intuitive manner for people used to writing functional code). FP style is also generally considered unpythonic.
I don't really know what this article was supposed to bring to the table here.
I'm late to the party, but I'd say D is a pretty decent programming language modeled after C++ with some more modern design choices. One caveat: D has a garbage collector that can sometimes pause a thread, which matters if you need really high performance, but this isn't really an issue for most applications.
I know, this is an educational language, but let me list some PLD antipatterns:
    for( let i = 0 ; i < 50 ; i = i + 1 )
    while(...) statement ;
Also, your language has a feature that is not quite an antipattern but still somewhat controversial: inheritance.
UPDATE: didn't immediately notice that your language is dynamically typed. Well, then the first problem is hard to avoid.
https://www.quora.com/Which-programming-languages-does-Google-use-internally
Unless you're trying to get a very specialized role, you'll almost certainly need experience with a few languages. Even if they're not languages used by Google, knowing a few would be a sign to the interviewers that you're willing to explore and grow, which are particularly important skills for a company like Google.
If you know nothing about programming, you'll have to decide where you want to start. C has the advantage of being relatively low-level and simple, but it's not particularly expressive. C++ is much more expressive than C without losing its low-level focus, but at the expense of a LOT of complexity. Java is like a somewhat refined C++ without as much low-level control, and C# is a much more refined Java. Everybody who uses Python seems to love it; same with Ruby. Or you could embrace 1960s technology and use Lisp or Clojure. (And that's not an insult to either language... it's impressive how relevant Lisp-like languages are 50 years later).
But maybe, right now, the best language to start with is JavaScript. Not because it's a particularly good language. Not because Node.js is very interesting. But because it's rapidly becoming the lingua franca of the software world. Find me a developer who knows 3 or more languages, and odds are that one of them is JS. You can get started with basically no tooling (just a web browser and a text editor), though there are a lot of tools that you can grow into. I'm not suggesting that it should be the only language you learn, but it's a language that you will continue to use for the foreseeable future.
What does "pleasing" mean? This question is too vague to have an answer.
I think it all depends on the application domain, which you haven't stated. My opinion is that all languages are domain-specific languages. A feature that is pleasing or useful in one context could be useless or harmful in others.
It also depends on the goal. A construct that helps you write fast programs may not be one that helps you write clear programs. Or it may not help you write a program that you can collaborate with others on. Design is about tradeoffs and different goals are often at odds.
Which Wikipedia articles about programming languages are good? I prefer to search through citations rather than Wikipedia. I posted a good survey paper here that people seem to like:
https://lobste.rs/s/j1hwab/research_debt/comments/8zekeh#c_8zekeh
I also re-read "Confessions of a Used Programming Language Salesman" by Meijer recently, and really liked it. It went over my head the first time around. (Some of it still does.) A lot of is a journey from Haskell to LINQ -- from "high culture" to "low culture", which I like.
Do you know rust? It seems to me that you'd like the features of that language. If you already know rust then what do you think will be different in your language?
You should check out Nim.
It's not designed to be a drop-in replacement for Python, but it manages to keep a Python-ish syntax with a static type system and a lot of cool extras. Plus it compiles to a native binary!
Use Go. Not hard to learn if you've dealt with other C-like languages, much closer to the metal (and thus faster) than interpreted or VM languages, type- and memory-safe unlike many other compiled C-like languages, and (perhaps most importantly for you) strong built-in support for very usable concurrency.
Only caveat is you probably don't want to use Go to generate the charts--do you already have a separate tool for that?
Python - a computer language for everybody: easy to learn/read/write, with an elegant design.
Reddit recommends learning Python -> https://www.reddit.com/r/learnprogramming/wiki/faq
Two exciting examples of minimalist syntax:
the language Eve, with its Search, Bind, Commit organization. It is a declarative-only language with a single complex datatype: sets of records.
the stack-based Factor programming language. This is a point-free language: there are no variables. This is perfect for an interpreter project because each language token is considered by itself. Syntax is just a series of words (well, they still use square brackets for a block of words). Each word either pushes itself on the stack (especially literals), or pulls values off the stack, processes them, and pushes new values onto the stack. This is the Forth language all grown up for the functional world.
Are you sure @DSLMarker doesn’t address what you’re talking about? By my reading of your comment, I think it does. But I could be misinterpreting you
Language specifications tend to include multiple parts: syntax, type system, and semantics.
The syntax spec mostly concerns what syntax is valid. This is often broken down into a description of how tokens are parsed[0], and then how tokens are put together to form the syntax.
When describing the type system, this includes what base types exist (like int and bool), how types can be put together (e.g. putting List and String together to get List<String>), and the rules for deciding if code is well-typed. Note that you may have a type system even in a dynamically typed language. For example, Python needs a description of how classes behave.
Semantics describe what the language "does." In statically typed languages, this only covers well-typed expressions. It mostly covers obvious stuff, like what happens when the test part of an if statement evaluates to true, or what variables are in scope where. Garbage collection details could be part of this spec (or it could leave that up to the implementation). This also contains minor details: if you have an expression f(g(), h()), it might specify that g() is evaluated before h(). It also needs to say what happens when you divide by zero or overflow an integer (and all the other edge cases).
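For the f(g(), h()) example, JavaScript/TypeScript is a language whose semantics do pin this down: ECMAScript guarantees left-to-right argument evaluation. A small sketch:

```typescript
// ECMAScript specifies left-to-right evaluation of call arguments,
// so the order observed here is guaranteed, not an implementation accident.
const order: string[] = [];
const g = () => { order.push("g"); return 1; };
const h = () => { order.push("h"); return 2; };
const f = (a: number, b: number) => a + b;

f(g(), h());
// order is now ["g", "h"] — g() always runs before h()
```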
The specification for Go might be worth browsing.
[0] e.g. is a+++b parsed as a + ++b or a++ + b? (Or is it invalid?)
I think writing an IDE from scratch is kinda reinventing the wheel, so I suggest writing an extension for VSCode instead. It provides a lot of useful APIs such as error reporting, auto-completion, loading source code, etc.
Even Google's Dart language uses VSCode as their IDE.
You can check out an example at https://code.visualstudio.com/docs/extensions/example-language-server
I can't think of any use case for varargs (of heterogeneous type) that isn't printf / logging / etc. I don't think I've encountered any in C or C++ code.
I know Rust implements printf with macros, and Zig uses compile-time metaprogramming (comptime). I'm not sure if either of them have varargs.
But anyway it seems like you don't really need it if you can express printf in a more general mechanism? I'd be interested in counterarguments.
If you don't have exceptions, I'd be worried about using multiple return values for errors, like Go. This article actually makes pretty good points, and the comment thread has some good ones too.
https://lobste.rs/s/yjvmlh/go_ing_insane_part_one_endless_error
I even noticed that C/C++ style "out params" can be more composable than multiple return values when one of the values is an error. It chains better and can be refactored.
A related issue is that Maybe is nicer because you don't have the value hanging around in the case when an error occurs. This happens in C/C++ code too, though, i.e. if the caller didn't check a return value and used the out param.
var value, err = foo() # caller must not use value if err is set
One area where I think this would be really useful is constructing safe parsers.
In other words, if you express a parser as a CFG, you know there is an algorithm that can recognize it in O(n^3) time. For LL(1) and LALR(1) (yacc), it's O(n), although yacc has semantic actions which will break this guarantee.
So basically I would like a linear-time parsing language, to fix the semantic actions problem, and to extend it to encompass more languages. I don't want a hard tradeoff between top-down and bottom-up parsing -- it should encompass both styles.
This relates to langsec: http://langsec.org/
(I think there are some problems with this research that I've mentioned elsewhere [1], but the general idea is useful.)
[1] https://lobste.rs/s/uyjzjc/science_insecurity_meredith_l_patterson#c_pzjzxh
Great, glad you found it useful! Actually I think we talked about this last week on lobste.rs:
https://lobste.rs/s/q3gmyi/what_are_you_working_on_this_week#c_zigv9g
(I am andyc there.)
I'm glad that someone else is working on this. I've wanted to work on it for over a year, but haven't been able to make it a priority, given all the other work. I look forward to any updates / blog posts on this experiment.
About unifying Lisp and ML: I know that /u/combinatorylogic is working somewhat in that area, although I haven't investigated the details.
I also got that sense from the Lux programming language, which was presented at Strange Loop:
The "modular compiler" stuff sounds like "putting most of the language in user space". I was a little skeptical because he threw it in at the end, saying "it will come real soon". I would rather have a demo than promises. But I believe the "modular compiler" and "unifying Lisp and ML" strands are somewhat related, and perhaps related to what you are thinking of.
Rust hit 1.0 5 or 6 years ago. C++ has been around for 30+ years.
Embark is using rust for commercial 3d applications including games.
Veloren is a cube-world like game made with rust. I don't know how commercial it is but it's a decent game.
Java is the language of the most popular game of all time. C# is the programming language of the currently most popular 3D engine, so I'm not sure what you're talking about.
I think hypocritical is fairly accurate in this situation. You are arguing that a language can only be judged on its commercial success but at the same time are pushing a language that has yet to be released nevermind successful commercial products. What would you call it?
The other commenters have linked to some great resources, but personally what I've found to be the best way to learn type theory is to just download Coq (or something similar, like Agda) and play with it. Software Foundations is a good way to learn Coq in particular.
Will do, I have a good feeling about this one.
I did some ARexx on the Amiga back in the day, but I only knew Pascal and Assembler at the time, so eval unfortunately flew under my radar. There's the Red language (http://www.red-lang.org/); I haven't looked into it much though. The issue I have with most other languages is the lack of deep integration with the C toolchain, which leads to bloated languages since they have to reinvent the world.
http://pypy.org/ (It's basically just a part of PyPy)
Also, it doesn't have a language spec because it's basically "Python, as long as it doesn't do anything dynamic the RPython toolchain can't deal with, like a = 1; a = "some string"."
Most notably: isinstance to suss out which subclass you're dealing with when you need access to wrapped values.

C++ (for http://strlen.com/lobster/). The motivation was that the compiler & runtime can easily be integrated into other projects, which in my case are typically game engines, which tend to be in C++. Also, having the compiler and runtime in the same language is kinda nice, and the runtime definitely needs either C or C++ (or Rust with a TON of unsafe). I started the project in 2011, so Rust wasn't on my radar as a choice; it probably would have been a better choice. Do I recommend it? I'd say the compiler part could have definitely been in an easier language, as it is not particularly performance sensitive. Might at some point rewrite it in the language itself :) Modern C++ is a relatively comfortable language if... you've been using it since forever :).
In many cases, the visual part is independent from the language. For that, there is Jupyter Notebooks. https://jupyter.org/
It's not tied to a language, it's a protocol. It supports over 40 languages.
haxe is worth a look if you want something different: http://haxe.org/use-cases/games/
and if you want something really different i'd recommend racket: http://docs.racket-lang.org/games/index.html?q=%s
Go has just proposed a new error handling mechanism that will be introduced into the language: the check expression.
It synergizes with the current error handling convention. Instead of

err, _ := some_other_op()
if err != nil {
    return err
}

you'd just write check some_other_op() with an optional handle procedure in the function.
You should look at D's slices and what the advantages and disadvantages are there. Slices consist of two data structures and that might not be "low-level enough"?
The data itself is an array prefixed with its length, which enables boundary checking and thus safety. I would strongly recommend implicit boundary checking even if it costs some bytes and checking overhead. The additional safety and security is worth it. In other words, prefer Pascal strings over C strings.
On top of that array is a structure which consists of a length and reference to the array. This is two words and can be passed around by value. The length in addition to the length prefix enables some nice tricks like substring without copy. Also, reserve a bit somewhere to mark it as a read-only slice. This structure on top gives you very efficient string comparison but is more complex.
In D slices are mutable (unless marked const) which can lead to confusing behavior. For example, if you pass a slice to some function which changes the data, this might or might not change the original slice depending on its length.
A string in D is a slice of const chars, where a char is a UTF-8 code unit. There are also slices of bytes. Do you want to handle Unicode? Then go down the rabbit hole of code points, code units, graphemes, and grapheme clusters.
edit: u/vqrs is right. Thanks.
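The aliasing surprise described above can be reproduced in TypeScript with typed-array views, which behave much like D slices (a {pointer, length} view onto shared storage, not a copy); subarray here plays the role of D's slicing:

```typescript
// subarray() returns a view sharing the same underlying buffer —
// "substring without copy", with the same aliasing hazard as a D slice.
const data = new Uint8Array([10, 20, 30, 40]);
const slice = data.subarray(1, 3); // views elements 1..2, no copy made

slice[0] = 99;
// data is now [10, 99, 30, 40]: mutating the slice changed the original,
// exactly the confusing behavior the comment warns about.
```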
Looks like a pretty good start. I have a few basic questions and comments:
How do you intend to differentiate between prefix and infix notation? I presume - 1 returns −1 and - 1 3 returns −2, but does - 1 - 3 mean (-1) - (3) (−4) or 1 - (-3) (+4)? Or is that a syntax error?
What are your plans for the type system? I presume you’re going to differentiate objects into more types than just obj.
.jlr (J u L ia R) would make more sense than .jrl for the file extension.
Assuming it’s meant to be pronounced /ˈdʒuːli.ə(r)/, it’s unfortunate that it has a nearly identical pronunciation to Julia, particularly in a non-rhotic accent.
Path-dependent types can desugar to existential quantification, and continuations can be used to simulate existential quantification using universal quantification. Thus, you can get the same effect in Java, TypeScript etc using generic continuations.
Here is your example in TypeScript with generic continuations:
interface Foo<Bar> {
new_bar(): Bar;
do_something_with_bar(bar: Bar): void;
}
declare function new_foo<T>(fn: <Bar>(foo: Foo<Bar>) => T): T;
new_foo(foo_0 => {
new_foo(foo_1 => {
const bar_0 = foo_0.new_bar();
const bar_1 = foo_1.new_bar();
foo_0.do_something_with_bar(bar_0); // OK
foo_0.do_something_with_bar(bar_1); // Compile error
})
});
That's pretty cool. Lua has an online demo but it times out if you try to do anything that takes any time at all (I assume because it's executed on their own servers).
Your page freezes temporarily instead. Would you be able to implement it in a way that doesn't do that?
There are several tutorial languages to move robots around on screen like blockly.
Things like screen size and taps instead of mouse are exactly what I was thinking about.
J is "APL with ASCII symbols". APL itself should be better suited because Unicode symbols condense the code even more. There seems to be an APL keyboard app to be paired with an APL interpreter.
WatAPL seems to focus on recreating the 1980's Watcom APL in DOSBox instead of building something new. I would envision an interface more like Jupyter/Matlab (graphical plots, images, videos, etc). Maybe gestures are an even better input method than custom keyboard keys?
I have implemented such a system before (save for making 2+3 * 5 a compile error, but I thought about it). I wouldn't say it's overkill because it's actually very simple to implement: you can represent operator precedence with a simple matrix, and the precedence parser is basically 30 lines of code in any language.
I have an implementation of the parser here (Parser.prototype.parse) that uses numeric precedence, but all you need to do to implement Fortress's scheme would be to make Parser.prototype.order look up operator pairs in a huge matrix. Most of the work would be populating that matrix.
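A minimal sketch of that matrix approach, with an invented toy grammar of just + and *; unlisted operator pairs are rejected, Fortress-style:

```typescript
// order(a, b): does the operator already on the stack (a) bind tighter
// than the incoming operator (b)? Unlisted pairs are an error.
type Ord = "tighter" | "looser" | "error";
const matrix: Record<string, Record<string, Ord>> = {
  "*": { "*": "tighter", "+": "tighter" },
  "+": { "*": "looser",  "+": "tighter" }, // +/+ tighter => left-associative
};

function order(a: string, b: string): Ord {
  return matrix[a]?.[b] ?? "error";
}

// An operator-precedence evaluator driven entirely by the matrix.
function evaluate(tokens: (number | string)[]): number {
  const vals: number[] = [];
  const ops: string[] = [];
  const apply = () => {
    const op = ops.pop()!, b = vals.pop()!, a = vals.pop()!;
    vals.push(op === "+" ? a + b : a * b);
  };
  for (const t of tokens) {
    if (typeof t === "number") { vals.push(t); continue; }
    while (ops.length) {
      const o = order(ops[ops.length - 1], t);
      if (o === "error") throw new Error(`ambiguous mix: ${ops[ops.length - 1]} vs ${t}`);
      if (o === "tighter") apply(); else break;
    }
    ops.push(t);
  }
  while (ops.length) apply();
  return vals.pop()!;
}

evaluate([2, "+", 3, "*", 5]); // 17: the matrix says * binds tighter than +
```

Fortress's scheme is then just a much bigger matrix, with "error" entries wherever mixing two operators without parentheses should be rejected.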
I get that Javascript is a mess of paradigms, but modern Javascript encourages you to use frameworks like React, which encourage you to write in a more functional style:
const AddWelcome = (GreetingComponent) => {
  const TheNewComponent = (props) => (
    <div>
      <GreetingComponent {...props} />
      <p>Welcome to React!</p>
    </div>
  );
  return TheNewComponent;
};
As you can see, this is a higher-order function, that is, a function taking another function as input and returning a new function as output.
React’s development guidelines promote the creation of stateless components, that is components not using the state property. The output of a component only depends on its props. This stateless component looks a lot like a pure function.
Again, not purely functional as they have to use a strategy to manage state but the majority of Javascript code you see these days is more functional and prototype looking than not.
Nobody has remarked on your uint256 types. Are they really 256-bit unsigned?
If so, is this directly supported by the language (and which one)? Because these would be handled somewhat differently from common types like uint64.
Such optimisations are usually done by a compiler, and big-integer types make things more complicated and harder to optimise.
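If the language doesn't natively support 256-bit integers, one common approach is to emulate them on top of arbitrary-precision integers and mask to the word size. A rough TypeScript sketch using `bigint` (the function names are invented; real implementations typically lower this to four 64-bit limbs instead):

```typescript
// Wrapping uint256 arithmetic emulated on JavaScript's bigint.
const BITS = 256n;
const MASK = (1n << BITS) - 1n; // 2^256 - 1

// Masking after each operation gives the modulo-2^256 wrapping
// behaviour that a native fixed-width type would have.
const addU256 = (a: bigint, b: bigint): bigint => (a + b) & MASK;
const mulU256 = (a: bigint, b: bigint): bigint => (a * b) & MASK;
```

For example, `addU256(MASK, 1n)` wraps around to `0n`, just as a hardware-backed unsigned type would overflow.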
If I write your example in C:
#include <stdio.h>
extern int z;
int main() {
    int x = z*100/1000;
    int y = z*1/10;
    int s = z/10;
    printf("%d %d %d %d\n", z, x, y, s);
}
Then using gcc -O3 will combine those three `1/10` operations into one, and turn it into a kind of multiply (something like `* 0.1`, but for ints). You can put the code into http://godbolt.org and try different compilers and options.
But it will all be inside the compiler. Not inside the language (which is an abstract thing), nor the OS (only used to launch the program and display the output), nor the ALU, which simply does the operations it's told (the multiplies and shifts the compiler outputs).
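The "multiply instead of divide" trick the compiler uses can be sketched by hand. For an unsigned 32-bit divide by 10, the standard magic constant is 0xCCCCCCCD with a shift of 35 (0xCCCCCCCD / 2^35 ≈ 0.1, with error small enough that the floor is exact for all 32-bit values). In TypeScript, using `bigint` to avoid 53-bit overflow:

```typescript
// Strength reduction: n / 10 rewritten as a multiply plus a shift,
// the same kind of transformation gcc -O3 performs on the C above.
function divideBy10(n: number): number {
  return Number((BigInt(n) * 0xcccccccdn) >> 35n);
}
```

The compiler emits the machine-code equivalent (a widening multiply and a right shift), which is much cheaper than a hardware division.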
Kotlin docs for type-safe builders
If there is a common name beyond builders, at least Kotlin doesn't know what it is.
In Groovy and Kotlin it's just the natural result of allowing trailing lambda arguments that have their own receiver (i.e. `this`). It's not, strictly speaking, a language feature so much as a pattern.
Fair enough! Yeah, I'd imagine that for a lot of the software you'd be writing in C++, the context makes enums and more strictness more relevant anyway.
So it all really depends on context.
These string literals are pretty awesome in webdev and for writing configuration formats (e.g. I define all my infrastructure like you would in Ansible, and in fact my system generates Ansible configs, ORMs, schema changes etc).
In JSX/TSX in React they give you typing for all your standard HTML attributes + CSS properties (without needing to import the thousands of possible values, where the enum name might need to vary slightly from how it actually looks in HTML in the end).
And in TS 4.1, you can even combine multiple fragments into a single string type, see:
https://www.typescriptlang.org/docs/handbook/release-notes/typescript-4-1.html
e.g. combining two string literal types (with three values each) into one with nine possible values. That is the exact same string used in the HTML.
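The 3×3 → 9 case is roughly the example from the TypeScript 4.1 release notes:

```typescript
// Two unions of three values each combine into nine concrete string types.
type VerticalAlignment = "top" | "middle" | "bottom";
type HorizontalAlignment = "left" | "center" | "right";
type Alignment = `${VerticalAlignment}-${HorizontalAlignment}`;

// "top-left" is one of the nine valid values; anything else is a compile
// error, and the value is exactly the string that ends up in the HTML/CSS.
function setAlignment(pos: Alignment): string {
  return pos;
}
```

So `setAlignment("top-left")` type-checks, while `setAlignment("top-middel")` is rejected at compile time.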
If you want others to try your language, then according to http://colinm.org/language_checklist.html you need to include IDE support (auto-complete, fix suggestions, auto-import), not just syntax highlighting. To do that, I recommend looking at https://code.visualstudio.com/docs/extensions/example-language-server .
Anyway, good job! Keep up the good work :)
It seems that there are already quite a few programs in the IOT dataflow space. The most interesting one that I've seen is Node-RED. Perhaps it could be adapted to also handle dataflow between programs on the local machine.
I think any of the languages would be fast enough, and all things being equal, I would choose the more productive one. And I agree D is interesting because it has garbage-collected data structures.
I had an exchange with someone interested in D here, and I posted my thoughts:
https://lobste.rs/s/glocqt/release_oil_shell_0_7_pre1#c_tkr0si
Basically the idea is that I'm going to work on automatic translation to C++. But it's possible that will fail or be subpar, and it would be nice to have other people pushing in parallel on a different codebase.
You would get a big "leg up" as I described in that post, because of all the DSLs.
Clojure has edn: https://github.com/edn-format/edn
Types are problematic in a distributed setting because they require a global view of the program. You can't upgrade an entire distributed system all at once. (If you can, it's a small system.)
Protocol buffers are sort of a pseudo-type system designed to solve this problem. (The RPC/IPC systems that other companies use, like Thrift and Avro, are more or less based on Google's protocol buffers.)
See Rich Hickey's "Maybe Not" video here and my comments with respect to protocol buffers / type systems in distributed programs:
https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey
Shipping code like lambdas is also a problem. In big systems you can't assume that every node is running the exact same version of the runtime. Otherwise how would you upgrade the runtime itself? There is no atomic upgrade. Upgrades must be done in a rolling manner with backward compatibility.
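The protobuf-style answer to rolling upgrades is that every field is optional and readers ignore what they don't understand, so old and new nodes can coexist. A toy TypeScript sketch (the message shapes here are invented, and JSON stands in for a real wire format):

```typescript
// Version 2 of the message adds a field; it must be optional so that
// version 1 readers, which don't know about it, still work.
interface GreetV2 {
  name: string;
  locale?: string; // added later; unknown to V1 readers
}

// A V1 reader only looks at the fields it knows about and silently
// ignores the rest -- the key property for rolling upgrades.
function readV1(wire: string): string {
  const msg = JSON.parse(wire) as { name: string };
  return `hello ${msg.name}`;
}

const v2Message: string = JSON.stringify({ name: "ana", locale: "pt" } as GreetV2);
```

Here `readV1(v2Message)` succeeds even though the message was produced by a newer node, which is exactly the backward compatibility a rolling upgrade needs.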
You can do that in Coq:

```
Implicit Type n : nat.
Definition f n := n.
(* f typechecks as nat -> nat *)
```
There are situations where this comes in quite handy.
That's why I said "Trac's take on Wiki markup adds easier code blocks". Trac Wiki markup. It still uses backticks for inline code, but for code blocks it uses `{{{#!language\n foobar}}}` instead of triple backticks.
Try https://meet.google.com/gpu-qxya-mfk -- that being said, today's lecture is about to finish in less than 5 minutes at the moment of writing (hopefully it's going to be recorded, https://twitter.com/christophkirsch/status/1239927586475708417).
This was a big part of Rebol’s dream: http://www.rebol.com/rebolcause.html
A lot of the ideas around the language focus on this messaging concept. Rebol today is sadly just a part time passion project of the creator, but Red seems to be carrying the torch: https://www.red-lang.org/
I'd look into multi-stage programming. Some languages work very similarly to your pseudocode, for example:
That's my understanding, too, on the HM question. It looks like they use Boehm for GC right now, but I don't know about plans on that. GC would have been a good question.
This really bit me in the ass when consuming json in javascript generated by PHP.
These days I like Crystal's middle ground. I shouldn't have to tell the compiler that "a = 5" means integer. Crystal only complains when you try to operate on incompatible types (like adding a list to a string) or perform an operation on Nil.
I'm sure raku folk would welcome you (there's a link to the #raku IRC channel in spokesbug Camelia's welcome text).
Using IRC is ingrained in Raku's cultural DNA -- as is being kind and helpful in response to folk such as yourself who want to learn, have fun, and get stuff done.
That said, their primary focus is Raku, which is a programming language, so I think you'll find they'll encourage you to play with it (using online evaluator bots if you like) rather than any other PL.
Look in the <title> tag.
I think the original implementation was in Ruby, though it's ~~not~~ now Javascript. I also seem to recall a much stronger "less.js" branding effort many months ago, but I might be mistaken.
edit My favorite typo.
You might want to have a look at http://strlen.com/lobster/
While not a functional language per se, it is a language with a heavy focus on higher order functions and type inference, and is targeting quick game prototyping first and foremost.
> Another option would be to make code generators their own thing, and have that be integrated with the compiler - you specify it in the build file, that kind of thing.
LLVM Tablegen kinda' does this. http://llvm.org/docs/TableGen/
This is the top hit for "Pointer Compression" on Google. I had come across this back in January when I wrote the code for the "oheap" format [1], but I somehow forgot about it until I googled again.
If you skim the first 5 slides, the idea is clear -- divide your address space into 4GB pools so that 64-bit pointers can be compressed into 32-bit. He also makes a clear case that big pointers are bad for performance, for recursive data structures, such as ones in compilers.
I don't yet understand what the automatic algorithm is. My thing is like a special case where you manually manage everything.
Paper (the second one listed on Chris Lattner's publication page, won PLDI 2005 Best Paper Award):
http://llvm.org/pubs/2005-05-21-PLDI-PoolAlloc.html
This paper describes Automatic Pool Allocation, a transformation framework that segregates distinct instances of heap-based data structures into separate memory pools and allows heuristics to be used to partially control the internal layout of those data structures. The primary goal of this work is performance improvement, not automatic memory management, and the paper makes several new contributions. The key contribution is a new compiler algorithm for partitioning heap objects in imperative programs based on a context-sensitive pointer analysis, including a novel strategy for correct handling of indirect (and potentially unsafe) function calls.
Although even just a clear indication of the number of dimensions and their names helps a lot. Like what PyTorch has, but in the types, could be nice. I've seen a library for that somewhere, but I'm having trouble finding it at the moment.
What if there was a turing complete DSL for arbitrary recursive descent parsers that looked BNFish? Say, something like this:
say .parse: '3+4*5*(1+2)', rule => <sub-expr>
    given grammar {
        rule sub-expr { <expr>* % <op('+')> }
        rule expr     { [ <number> | '(' ~ ')' <sub-expr> ]* % <op('*')> }
        rule number   { <.digit>+ }
        rule op ($op) { $op }
    }
where a rule is a function whose calling protocol took care of whitespace handling, backtracking bookkeeping if really necessary, etc., and displayed a parse tree like this:
「3+4*5*(1+2)」
  expr => 「3」
    number => 「3」
  op => 「+」
  expr => 「4*5*(1+2)」
    number => 「4」
    op => 「」
    number => 「5」
    op => 「」
    sub-expr => 「1+2」
      expr => 「1」
        number => 「1」
      op => 「+」
      expr => 「2」
        number => 「2」
(Imagine that it scaled up nicely to handling arbitrarily complex parsing, AST generation, and so forth.)
What if this all just worked? Thoughts?
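The "rule is a function with a calling protocol" idea maps naturally onto parser combinators. A tiny TypeScript sketch (all names invented), where each rule skips whitespace itself and "backtracking" is just returning to the old position on failure:

```typescript
// A rule takes the input and a position; it either fails (null) or
// returns the parsed value plus the position after the match.
type Rule<T> = (src: string, pos: number) => [T, number] | null;

// The calling protocol: every rule skips leading whitespace itself.
const ws = (src: string, pos: number): number => {
  while (pos < src.length && src[pos] === " ") pos++;
  return pos;
};

const lit = (s: string): Rule<string> => (src, pos) => {
  pos = ws(src, pos);
  return src.startsWith(s, pos) ? [s, pos + s.length] : null;
};

const number: Rule<number> = (src, pos) => {
  pos = ws(src, pos);
  let digits = "";
  while (pos < src.length && src[pos] >= "0" && src[pos] <= "9") digits += src[pos++];
  return digits ? [Number(digits), pos] : null;
};

// sum := number ('+' number)*  -- a rule built from other rules.
const sum: Rule<number> = (src, pos) => {
  const first = number(src, pos);
  if (!first) return null;
  let [total, p] = first;
  for (;;) {
    const plus = lit("+")(src, p);
    if (!plus) return [total, p]; // backtrack: no '+' consumed
    const rhs = number(src, plus[1]);
    if (!rhs) return null;
    total += rhs[0];
    p = rhs[1];
  }
};
```

Scaling this up to named captures and a parse-tree display (rather than computed values) is mostly bookkeeping layered on the same protocol.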
I am planning on making the tutorial in an online editor (like the one for svelte) to help make it more easily adoptable. Other than that, is there anything else you recommend to help make it more easily adoptable?
> Runtime contracts? as in, assert statements at the beginning of function bodies? Those are hacks/workarounds for languages that don't have static typing. I used to do this in Javascript, until I switched to Typescript.
No, as in Racket Contracts (https://docs.racket-lang.org/reference/contracts.html) or Clojure Spec (https://clojure.org/guides/spec).
> Rule systems? I'm not familiar with those but I have a gut feeling they only exacerbate the problem.
Obviously you're not.
In case you would consider that decades of hard work from experts could provide better insights than your gut feeling, check out forward-chaining rule systems and logic engines.
> I don't understand the value you see in removing compile time type checking, honestly.
There are many well-known shortcomings to static type systems, as shown by the fact that we keep seeing new ones. Martin Odersky himself acknowledges that types introduce complexity, and it is well known that a type system can lead to over-specificity.
Sorry if my answer seems unfriendly, but I sincerely believe that our industry really suffers from the kind of attitude shown in your comment - being fixated on just "one paradigm above them all" and dismissing alternatives out of ignorance. If only the only people who made comparisons were those who had actually tried all the alternatives...
To be clear, I'm not against static typing - I'm against lack of nuance in technological choices.
There is a way to do this, but it requires an incredibly well-built IDE to make it work.
It's still text-based.
Imagine HTML (or maybe RTF; it's almost the same for this argument).
HTML separates content, layout, and style (YMMV on how well). You can have a visual representation of code, plus the possibility of embedding images, multimedia, and custom widgets per page/IDE/language, so you "solve" how this could get some adoption.
However, expect the "browser" to be complex, at least as complex as a regular word processor.
But it's not that alien a concept; it has worked before (in other contexts), and we have a close, truly successful implementation in Jupyter Notebooks (https://jupyter.org/).
I have been thinking about this in relation to something else (a truly new terminal, NOT an emulator, but the same idea), and the vision is to have an HTML-like markup that allows switching widget rendering, so it could look like this (totally invented):
<codeDom lang="F#">
  <meta licence="MIT">
  <namespace="Utils" file="Utils.fs">
    <code>
      <doc>A minimal print function</doc>
      let print x = printfn "%A" x
      let blue = <type=Color render="ColorPicker">#00688B</type>
    </code>
The idea is that the doc is alive. Using static types, the compiler could annotate the code on the fly (like noting that the hex value is a color and can be rendered by a cool color picker), and it could be made to work with dynamic languages too.
Or the user annotates the code, if the tool fails.
Or maybe the doc is made only from includes:
<codeDom lang="F#">
  <meta licence="MIT">
  <namespace="Utils" file="Utils.fs">
    <code>
      <doc = "Utils.fs.doc#print">
      <src = "Utils.fs">
    </code>
So the files are standalone as usual, and the doc is just the cool metadata. This way the normal tools will work as expected, yet we have a way to spice up the code!
If you want a slightly more robust, yet still simple, language that's essentially acting as an improved version of C with proper templates and metaprogramming, D has a stripped-down mode literally called "better C" which outright disables most of those fancy-pants extraneous features like a somewhat more sensible means of error handling (exceptions), OOP with inheritance and interfaces, memory management that doesn't make you want to drink, and a batteries-included std lib.
Although I'm not sure why you'd want to do that when you have the option of using a language with more features to make life easier.
In D there are what's called User Defined Attributes. Declarations of any sort can be annotated with anything that's prefixed with `@`. It becomes interesting because D has compile-time function evaluation, template metaprogramming, and compile-time traits/reflection. What's annotated can then be retrieved at compile time and used to create almost anything you want: new types, arrays (which matches your idea of embedded metadata), new functions, etc.
I think some form of GC is a natural fit for managing the memory of a cyclic graph of objects. A specialized incremental tracing GC that is constrained to run in under X microseconds per frame would be good for gaming, assuming you don't put too much pressure on it. If the hypothetical language had value types (like C#), this would work well.
Reference counting with cycle collection could also work, as long as the cycle collection is incremental. I know that Nim uses deferred reference counting with a configurable time limit for realtime applications like games. I believe the cycle detector is not yet incremental, though, and heaps are thread-local: https://nim-lang.org/docs/gc.html
If you want to get fancy, you could also take a look at Azul's Zing collector, which is a concurrently compacting pauseless collector for the JVM.
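The "under X microseconds per frame" constraint usually means giving the collector a queue of small work units and a time budget each frame; a hand-wavy TypeScript sketch (not a real collector, just the scheduling shape):

```typescript
// Incremental collection: do marking/sweeping in small units and stop
// as soon as the per-frame budget is spent, resuming next frame.
function runGcSlice(workQueue: Array<() => void>, budgetMs: number): number {
  const start = Date.now();
  let done = 0;
  while (workQueue.length > 0 && Date.now() - start < budgetMs) {
    workQueue.shift()!(); // one unit of marking or sweeping work
    done++;
  }
  return done; // work units completed this frame
}
```

The game loop calls `runGcSlice(queue, budget)` once per frame; any leftover work simply carries over, which is what keeps individual pauses bounded.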
I haven't looked into Nim in a while, but as far as I can tell, yes you can. The caveat is that you can't do most FFI, so you miss out on chunks of the stdlib and parts of the language (stuff like manual memory management, which goes without saying). You can try out the rudimentary REPL with `nim secret`. It was renamed to "secret" because people kept mistaking it for a full-blown interpreter.
Nimscript is still a very nice subset. In addition to using it as a lightweight language that can be used as a build tool (the package manager Nimble supports and itself uses it), one can also embed it inside a Nim application as a scripting language (example). Don't know how easy would it be to use it like lua from other languages.
I haven't personally implemented something like this, but if you haven't already googled for "javascript source maps" that might be worth a look. For example:
https://developer.mozilla.org/en-US/docs/Tools/Debugger/How_to/Use_a_source_map
https://developers.google.com/web/tools/chrome-devtools/javascript/source-maps
The source map is a separate file that apparently the web browser can interpret. How well it works I don't know :-/
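For reference, the moving parts are small. The generated file ends with a comment pointing at the map, and the map itself is a little JSON file; a minimal sketch with the fields from the Source Map v3 format (file names invented):

```typescript
// The last line of the generated bundle points the devtools at the map:
//   //# sourceMappingURL=app.min.js.map
//
// The map is JSON; `mappings` is a base64-VLQ string that the browser
// decodes to map generated positions back to the original source.
const exampleSourceMap = {
  version: 3,
  file: "app.min.js",
  sources: ["src/app.ts"],
  names: ["greet"],
  mappings: "AAAA", // decoded by the browser, not by you
};
```

Your compiler would emit this file alongside its output; the browser fetches it automatically when devtools are open.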
Just by the way, I found a few other people trying to display maths in Java Processing
https://processing.org/discourse/beta/num_1257034116.html
There was also a paper on rendering maths with Java but the link was huge for some reason
I thought I'd also mention that you can use custom fonts when rendering, so you should be able to find a maths font (though complex expressions won't be well represented). Worst case, there are great maths-rendering web technologies, so maybe switch from Processing to a language you'd use for web development, and you'll be able to send MathJax or something similar to the browser (Processing can do that, but its tooling isn't my favourite for the web).
BTW, this subreddit is about building languages, not using them for everyday tasks!
What you need instead is a good library/API that lets you implement security correctly.
What you are asking for is a user auth flow.
If you want/need to start from scratch, use a battle-tested web framework like Django:
https://www.djangoproject.com/
with a good library for user auth on top.
Also, you will need to use HTTPS everywhere.
Having worked on this for a while, I can say there are a lot of small things that need to be kept in mind, and it's very easy to screw up!
So, instead, use something like:
> I recall the Lua authors saying that they liked the structure of yacc at first to design the language, and then once it was stable, they rewrote it in recursive descent.
Yes, this is mentioned in The Implementation of Lua 5.0:
> Lua uses a hand-written scanner and a hand-written recursive descent parser. Until version 3.0, Lua used a parser produced by yacc, which proved a valuable tool when the language’s syntax was less stable. However, the hand-written parser is smaller, more efficient, more portable, and fully reentrant. It also provides better error messages.
I'm unsure what "fully reentrant" means here. Presumably that Lua can begin parsing some code, call itself to begin a new parse and when done, pop the stack and resume the first parse? Why would it need to do that?
Are generics planned?
From the FAQ:
> Generics may well be added at some point. We don't feel an urgency for them, although we understand some programmers do.
> Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it. Meanwhile, Go's built-in maps and slices, plus the ability to use the empty interface to construct containers (with explicit unboxing) mean in many cases it is possible to write code that does what generics would enable, if less smoothly.
Yes, IRC is still used, e.g. freenode, with many active channels. Finding a good community can be difficult, as always.
This subreddit is for creating new programming languages, and has some overlap with #proglangdesign.
Do you need type inference? If not you can look at the presentation of static typing in this book, which doesn't have any theory:
https://www.amazon.com/Language-Implementation-Patterns-Domain-Specific-Programming/dp/193435645X
These are the two that were recommended to me on this topic:
https://smile.amazon.com/gp/product/110703650X/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
https://smile.amazon.com/gp/product/0262162091/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
I just tried it out. Or tried to try. I'm getting an exception about PresentationFramework.Aero2 missing. Looks like this is the same problem and a solution.
Thanks so much for the excellent explanation. I remember reading about this in Smalltalk, Objects, and Design when reading about polymorphism. They only focussed on the upside of not having to adjust methods. I did notice it when building functional stuff: having to update a bunch of functions whenever I made a change to the data.
Yeah, I'm pretty excited about this stuff too. I think VS Code has ways of adding custom editors, but I'm not sure if you can extend the code area with custom literals.
And yeah, Snap! is really cool - would love to see a way of adding new blocks by programming using blocks themselves. Also integrating this stuff with type systems would be really neat too.
> When were tail calls added to the runtime, does C# handle them well or not?
I believe they were added in 2001. AFAIK C# does not support them.
There was a variant of Smalltalk with optional static typing called Strongtalk. You could add the type annotation or not and if you did provide it the compiler would try to use it to keep you out of the weeds. Objective C, which is pretty much a Smalltalk runtime written as a C library using function pointers as method handles, also provided additional compile time checking if you used real types. The plain object type in Objective C is just 'id'. Type something as id and the compiler will let you send any message to it. Type it as NSArray and it will warn (but not error) that you are sending messages that NSArray cannot handle.
FWIW, the Strongtalk runtime was purchased by Sun and is better known as the HotSpot VM today, and we lost another chance to make Smalltalk a mainstream language. Lots of fundamental fumbles have kept Smalltalk from going mainstream. Really tragic. It is such a joy to work with.
There is a more recent experiment being done with Cuis (a Squeak fork) called Live Typing that attempts to infer types from how code gets used.
Slideshare of Live Typing, and here is a Quora question with some heavy hitters from the Smalltalk world weighing in on that question.
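For a more recent reference point, TypeScript offers the same spectrum: `any` behaves much like Objective-C's `id` (the compiler lets you call anything on it), while a concrete type gets you compile-time checks — though TypeScript errors rather than warns:

```typescript
// `any` is the escape hatch: like Objective-C's `id`, the compiler
// accepts any member access and you find out at runtime.
const loose: any = [1, 2, 3];
// loose.fooBar(); // compiles fine, would throw at runtime

// A concrete type catches the mistake at compile time instead.
const strict: number[] = [1, 2, 3];
// strict.fooBar(); // compile error: Property 'fooBar' does not exist
const total = strict.reduce((a, b) => a + b, 0);
```

Gradual typing systems like Strongtalk's let you slide between these two ends annotation by annotation.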
There was a reason that I stated, "for me...". Reading balanced parens is quite easy, for me. Substituting seemingly-arbitrary glue tokens in lieu of punctuation and balanced-container tokens is very non-obvious, for me.
Note: I don't work in Haskell. I am planning to learn it (this book is on my night stand, waiting for me: https://smile.amazon.com/gp/product/1593272839/), but thus far the necessary spare time has not arrived.