YeGoblynQueenne's comments | Hacker News

It's not standard but that's how I write Prolog. I think I got it from SQL?

I don't usually leave the full-stop on its own line though. You can always select the entire line, then move one down to cut it without catching the full stop. If that makes sense?


It's not an issue. "+" is not a function in Prolog. So, for example, X = 1 + 1 means that the term 1 + 1, or +(1,1), is bound to the variable X. It doesn't mean that 1 + 1 is evaluated, and its result assigned to the variable X. Prolog doesn't have assignment, and all its data structures are immutable, including variables, so once a variable is bound it stays bound for the scope of the current execution branch. That's unification.
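A quick top-level sketch (SWI-Prolog style output; other implementations format answers a little differently). Note is/2, which does evaluate arithmetic:

  ?- X = 1 + 1.
  X = 1+1.

  ?- X = 1 + 1, X == +(1, 1).   % X is bound to the term +(1,1)
  X = 1+1.

  ?- X is 1 + 1.                % is/2 evaluates its right-hand side
  X = 2.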

The thing to keep in mind is that Prolog borrows its syntax and semantics from (a fragment of) First Order Logic, and that's where its treatment of functions comes from. In FOL there's a concept of a "term": a variable is a term, a constant is a term, and a function symbol followed by one or more comma-separated terms in parentheses is a term. So if f is a function symbol, f(X, a(b), c) is a term, where X is a variable (in Prolog notation, where variables start with capital letters or _). Terms are mapped to functions over objects in the domain of discourse by a pre-interpretation (long story). Terms are arguments to atomic formulae, which look exactly like terms with one or more arguments, except they have predicate symbols rather than function symbols. So if p is a predicate symbol, then p(f(X, a(b), c), d, 1, 2, 3) is an atomic formula, or atom.

There's a bit of terminological confusion here because Prolog calls everything a "term", including atomic formulae, and it calls its constants "atoms", but, well, you learn to deal with it.
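You can see this everything-is-a-term treatment directly at the top level, e.g. with the "univ" operator =../2, which decomposes a term into its functor and arguments (a sketch; output formatting varies):

  ?- T = f(X, a(b), c), T =.. [Functor|Args].
  T = f(X, a(b), c),
  Functor = f,
  Args = [X, a(b), c].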

The difference between terms (in both Prolog and FOL) and functions (in most programming languages) is that terms are not evaluated and they are not replaced by their values. Rather, during execution of a Prolog program, variables in a term are bound to values, by unification, and when execution is over, if you've got all your ducks in a row, all variables in your program should be bound to some term with no variables, and all terms will be "ground" (i.e. have no variables).
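You can watch that grounding happen with something as small as a member/2 query: each answer binds X to a ground term (top-level sketch):

  ?- member(X, [a, b, c]).
  X = a ;
  X = b ;
  X = c.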

That's because a ground term is essentially a proposition, and so it has a truth value of true or false. The point of FOL, and of Prolog and every other logic programming language, is to carry out a proof of a theorem, expressed as a logic program. If a term has variables we can't know its truth value, because it may correspond to a different value of a function, depending on the values of its variables. So to know the truth or falsehood of a term we need to ground it. Unification in Prolog is a dirty hack that allows the proof to proceed without fully grounding terms, until the truth of an entire theorem is known, at which point everything becomes ground (or should be ... more or less, it depends).

ASP (Answer Set Programming) instead starts by grounding every term. Then, it basically treats a logic program as a SAT formula and uses a SAT-solver to find its truth or falsehood (it does more than that: it gives you all models of the theorem, i.e. every set of ground atoms that makes it true).
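To illustrate, here's a tiny program (legal in both Prolog and ASP syntax) and, in the comments, the ground program a grounder would conceptually produce from it (real grounders like gringo also simplify away conditions that are known facts):

  p(1). p(2).
  q(X) :- p(X).

  % After grounding, the rule is instantiated once per constant:
  % q(1) :- p(1).
  % q(2) :- p(2).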

And that's where the tiger got its stripes. Don't expect Prolog (or ASP for that matter) to work like other languages; it has its own semantics. You don't have to like them, but there's a reason why everything is the way it is.


I think that should be nonvar(A), nonvar(B), because the reason the unification succeeds and \+(A = B) fails is that A and B are variables (when called as foo(A,B)). What confuses the author is unification, as far as I can tell.

But, really, that's just not good style. It's bound to fail at some point. It's supposed to be a simple example, but it ends up not being simple at all because the author is confused about how it's supposed to behave.
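To spell that out, here's my sketch of the guarded version (not the article's code):

  foo(A, B) :-
      nonvar(A),
      nonvar(B),
      \+ (A = B),
      A = 1,
      B = 2.

  ?- foo(1, 2).
  true.

  ?- foo(A, B).    % now fails at the nonvar/1 guards, not at the negation
  false.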


>> Cut stops backtracking early which means you might miss valid solutions

That's right, but missing valid solutions doesn't mean that your program is "invalid", whatever that means. The author doesn't say.

Cuts are difficult and dangerous. The danger is that they make your program behave in unexpected ways. Then again, Prolog programs behave in unexpected ways even without the cut, and once you understand why, you can use the cut to make them behave.

In my experience, when one begins to program in Prolog, they pepper their code with cuts to try and stop unwanted backtracking, which can often be avoided by understanding why Prolog is backtracking in the first place. But that's a hard thing to get one's head around, so everyone who starts out makes a mess of their code with the cut.

There are very legitimate and safe ways to use cuts. Prolog textbooks sometimes introduce a terminology of "red" and "green" cuts. Red cuts change the set of answers found by a query, green cuts don't. And that, in itself, is already hard enough to get one's head around.
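The classic illustration is max/3; a sketch:

  % Green cut: the two clauses are mutually exclusive anyway, so the
  % cut only prunes a useless choice point; the answers don't change.
  max_green(X, Y, X) :- X >= Y, !.
  max_green(X, Y, Y) :- X < Y.

  % Red cut: the second clause is only correct when the cut in the first
  % clause fires. max_red(3, 1, Max) gives Max = 3, but max_red(3, 1, 1)
  % also succeeds, wrongly: the head of the first clause fails before
  % the cut is ever reached, so nothing stops the second clause.
  max_red(X, Y, X) :- X >= Y, !.
  max_red(_, Y, Y).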

At first, don't use the cut, until you know what you're doing, is I think the best advice to give to beginner Prolog programmers. And to advanced ones sometimes. I've seen things...


> In my experience, when one begins to program in Prolog, they pepper their code with cuts to try and stop unwanted backtracking, which can often be avoided by understanding why Prolog is backtracking in the first place.

This gets to the heart of my problem with Prolog: it's sold as if it's logic programming - just write your first-order predicate logic and we'll solve it. But then to actually use it you have to understand how it's executed - "understanding why Prolog is backtracking in the first place".

At that point, I would just prefer a regular imperative programming language, where understanding how it's executed is really straightforward, combined with some nice unification library and maybe a backtracking library that I can use explicitly when they are the appropriate tools.


> This gets to the heart of my problem with Prolog: it's sold as if it's logic programming - just write your first-order predicate logic and we'll solve it. But then to actually use it you have to understand how it's executed

Prolog is a logic-flavored programming language. I don't recall Prolog ever being "sold" as pure logic. More likely, an uninformed person simply assumed that Prolog used pure logic.

Complaining that Prolog logic doesn't match mathematical logic is like complaining that C++ objects don't accurately model real-life objects.


    I don't recall Prolog ever being "sold" as pure logic.
One of the guides linked above describes it as:

    The core of Prolog is restricted to a Turing complete subset of first-order predicate logic called Horn clauses

> The core of Prolog is restricted to a Turing complete subset of first-order predicate logic called Horn clauses

Does this sound to you like an attempt to deceive the reader into believing, as the GP comment stated, that the user can

> just write your first-order predicate logic and we'll solve it.


It absolutely does sound like "write your first order logic in this subset and we'll solve it". There's no reasonable expectation that it's going to do the impossible, like deciding arbitrary first-order logic.

> It absolutely does sound like "write your first order logic in this subset and we'll solve it".

No it does not. Please read the words that you are citing, not the words that you imagine. I honestly can't tell if you are unable to parse that sentence or if you are cynically lying about your interpretation in order to "win" an internet argument.

All programming languages are restricted, at least, to a "Turing complete subset of first-order predicate logic." There is absolutely no implication or suggestion of automatically solving any, much less most, first order logic queries.


Except it cannot decide all Horn clauses.

>> This gets to the heart of my problem with Prolog: it's sold as if it's logic programming - just write your first-order predicate logic and we'll solve it. But then to actually use it you have to understand how it's executed - "understanding why Prolog is backtracking in the first place".

Prolog isn't "sold" as a logic programming language. It is a logic programming language. Like, what else is it?

I have to be honest and say I've heard this criticism before and it's just letting the perfect be the enemy of the good. The criticism is really that Prolog is not a 100% purely declarative language with 100% the same syntax and semantics as First Order Logic.

Well, it isn't, but if it was, it would be unusable. That would make the critics very happy, or at least the kind of critics that don't want anyone else to have cool stuff, but in the current timeline we just have a programming language that defines the logic programming paradigm, so it makes no sense to say it isn't a logic programming language.

Edit:

>> At that point, I would just prefer a regular imperative programming language, where understanding how it's executed is really straightforward, combined with some nice unification library and maybe a backtracking library that I can use explicitly when they are the appropriate tools.

Yeah, see what I mean? Let's just use Python, or Java, or C++ instead, which have 0% of FOL syntax and semantics and are 0% declarative (or maybe 10% in the case of C++ templates). Because we can't make do with 99% logic-based and declarative, gosh no. Better to have no alternative than a less than absolutely idealised, perfect, ivory-tower alternative.

Btw, Prolog's value is its SLD-Resolution based interpretation. Backtracking is an implementation detail. If you need backtracking use yield or whatever other keyword your favourite imperative language gives you. As to unification, good luck with a "nice unification library" for other languages. Most programmers can't even get their head around regexes. And good luck convincing functional programmers that "two-way pattern matching" (i.e. unification) is less deadly than the Bubonic Plague.
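For a taste of what such a library would have to replicate, here's append/3 both concatenating and splitting, depending on which arguments are bound (top-level sketch):

  ?- append([1, 2], [3], Zs).     % forwards: concatenation
  Zs = [1, 2, 3].

  ?- append(Xs, Ys, [1, 2, 3]).   % backwards: every way to split
  Xs = [], Ys = [1, 2, 3] ;
  Xs = [1], Ys = [2, 3] ;
  Xs = [1, 2], Ys = [3] ;
  Xs = [1, 2, 3], Ys = [].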


> Red cuts change the set of answers found by a query, green cuts don't.

Ohhh, interesting. So a green cut is basically what I described as cutting branches you know are a waste of time, and red cuts are the ones where you're wrong and cut real solutions?

> At first, don't use the cut, until you know what you're doing, is I think the best advice to give to beginner Prolog programmers. And to advanced ones sometimes. I've seen things...

Yeah, I'm wondering how much of this is almost social or use-case in nature?

E.g., I'm experimenting with Prolog strictly as a logic language and I experiment with (at a really novice level) things like program synthesis or model-to-model transformations to emulate macro systems that flow kind of how JetBrains MPS handles similar things. I'm basically just trying to bend and flex bidirectional pure relations (I'm probably conflating fp terms here) because it's just sort of fun to me, yeah?

So cut _feels_ like something I'd only use if I were optimizing and largely just as something I'd never use because for my specific goals, it'd be kind of antithetical--and also I'm not an expert so it scares me. Basically I'm using it strictly because of the logic angle, and cut doesn't feel like a bad thing, but it feels like something I wouldn't use unless I created a situation where I needed it to get solutions faster or something--again, naively anyway.

Whereas if I were using Prolog as a daily GP language to actually get stuff done, which I know it's capable of, it makes a lot of sense to me to see cut and `break` as similar constructs for breaking out of a branch of computation that you know doesn't actually go anywhere?

I'm mostly spit-balling here and could be off base. Very much appreciate the response, either way.


>> So a green cut is basically what I described as cutting branches you know are a waste of time, and red cuts are the ones where you're wrong and cut real solutions?

Basically, yes, except it's not necessarily "wrong", just dangerous because it's tempting to use it when you don't really understand what answers you're cutting. So you may end up cutting answers you'd like to see after all. The "red" is supposed to signify danger. Think of it as red stripes, like.

Cuts make stuff go faster too (well, a little bit); in general they help the compiler/interpreter optimise code execution. I however use the cut much more for its ability to help me control my program. Prolog makes many concessions to efficiency and usability, and the upshot of this is you need to be aware of its idiosyncrasies, the cut being just one of them.

>> Whereas if I were using Prolog as a daily GP language to actually get stuff done, which I know it's capable of, it makes a lot of sense to me to see cut and `break` as similar constructs for breaking out of a branch of computation that you know doesn't actually go anywhere?

Cuts work like breaks sometimes, but not always. To give a clear example of where I always use cuts, there's a skeleton you use when you want to process the elements of a list that looks like this:

  list_processing([], ..., Bind, Bind, ...):-
      !. % <-- Easiest way to not backtrack once the list is empty.

  list_processing([X|Xs], ..., Acc, Bind, ...):-
      condition(X)
      ,! % Easiest way to not fall over to the last clause.
      ,process_a(X, Y)
      ,list_processing(Xs, ..., [Y|Acc], Bind, ...).

  list_processing([X|Xs], ..., Acc, Bind, ...):-
      process_b(X, Y)
      ,list_processing(Xs, ..., [Y|Acc], Bind, ...).
So, the first cut is a green cut, because it doesn't change the set of answers your program will find: once the list in the first argument is empty, it's empty, there's no more to process. However, without the cut Prolog will leave two choice points behind, one for each of the other two clauses, because it can't know what you're trying to do, so it can't just stop because it found an empty list.

The second cut is technically a red cut: you'd get more answers if you allowed both process_a and process_b to modify your list's elements, but the point is you don't want that, so you cut as soon as you know you only want process_a. So this is forcing a path down one branch of search, not quite like a break (nor a continue).

You could also get the same behaviour without a cut, by e.g. having a negated condition(X) check in the last clause and also checking that the list is not empty in every other clause (most compilers are smart enough to know that means no more choice points are needed), but, why? All you gain this way is theoretical purity, and more verbose code. I prefer to just cut there and get it done. Others of course disagree.
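To make the skeleton concrete, here's a minimal instantiation under names of my own (not the parent's code), where process_a doubles even numbers and process_b keeps odd ones as they are:

  double_evens([], Bind, Bind):-
      !.

  double_evens([X|Xs], Acc, Bind):-
      0 is X mod 2
      ,!
      ,Y is X * 2
      ,double_evens(Xs, [Y|Acc], Bind).

  double_evens([X|Xs], Acc, Bind):-
      double_evens(Xs, [X|Acc], Bind).

  ?- double_evens([1, 2, 3, 4], [], Result).
  Result = [8, 3, 4, 1].

Note that the accumulator reverses the order of the list; add a reverse/2 at the end if that matters.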


>> Code logic is expressed entirely in rules, predicates which return true or false for certain values.

Open any Prolog programming textbook (Clocksin & Mellish, Bratko, Sterling & Shapiro, O'Keefe, anything) and the first thing you learn about Prolog is that "code logic" is expressed in facts and rules, and that Prolog predicates don't "return" anything.
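The distinction is easy to demonstrate: a predicate like length/2 doesn't return a value, it states a relation between its arguments, so it runs in more than one mode (a sketch; SWI-Prolog invents names like _A for fresh variables):

  ?- length([a, b, c], N).   % "forwards": N is the length
  N = 3.

  ?- length(Ls, 2).          % "backwards": a list of two fresh variables
  Ls = [_A, _B].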

The confusion only deepens after that. There are no boolean values? In an untyped language? Imagine that. true/0 and false/0 are not values? In a language where everything is a predicate? Imagine that. Complete lack of understanding that "=" is a unification operator, and that unification is not assignment. Like, really, it's not. It's not just a fancy way to pretend you don't do assignment while sneaking it in through the back door, to be all smug and laugh at the noobs who aren't in the in-group. It's unification. It doesn't work the way you think it should if you think it should work like assignment, because everything is immutable, so you really, really don't need assignment. Complete misunderstanding of the cut and its real dangers, complete misunderstanding of Negation as Failure, a central concept in logic programming (including in ASP), and so on and so on and so on and on.

The author failed to do due diligence. And if they've written "in a positive light" about Prolog, I would prefer not to read it because I'll pull my remaining hair out, which is not much after reading this.


Is it your contention that the author doesn't understand that Prolog predicates don't "return" anything, that they were expecting assignment rather than unification? I would read it again: their examples clearly state these points (noting that the author does say "return", but also clearly shows bidirectional examples).

Both you and GP have had some fairly strong responses to what looked like mild complaints, the kind I would expect anyone to have with a language they've used enough to find edges to.


See this:

  The original example in the last section was this:

  foo(A, B) :-
      \+ (A = B),
      A = 1,
      B = 2.

  foo(1, 2) returns true, so you'd expect f(A, B) to return A=1, B=2. But it returns false.

foo(A,B) fails because \+(A = B) fails, because A = B succeeds. That's because = is not an assignment but a unification, and in the query foo(A,B), A and B are variables, so they always unify.

In fact here I'm not sure whether the author expects = to work as an assignment or an equality. In \+(A = B) they seem to expect it to work as an equality, but in A = 1, B = 2, they seem to expect it to work as an assignment. It is neither.
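The difference shows up immediately if you reorder the goals (top-level sketch):

  ?- \+ (A = B), A = 1, B = 2.    % the article's order: fails
  false.

  ?- A = 1, B = 2, \+ (A = B).    % bind first, then test: succeeds
  A = 1,
  B = 2.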

I appreciate unification is confusing and takes effort to get one's head around, but note that the author is selling a book titled LOGIC FOR PROGR∀MMERS (in small caps) so they should really try to understand what the damn heck this logic programming stuff is all about. The book is $30.


The author also wrote in the same article:

> This is also why you can't just write A = B+1: that unifies A with the compound term +(B, 1)


Yes, and then they were horribly confused about why foo(A,B) fails, regardless. They clearly have heard of unification and find it a fascinating concept but have no idea what it means.

Honestly, we don't have to wrap everyone on the internets in a fuzzy warm cocoon of acceptance. Sometimes people talk bullshit. If they're open to learn, that's great, but the author is snarkily dismissing any criticism, so they can stew in their ignorance as far as I am concerned.

Like the OP says, the author didn't bother to RTFM before griping.


And also "Prolog Programming for AI" by Bratko and "Programming in Prolog" by Clocksin and Mellish.

Although these days I'd recommend anyone interested in Prolog starts in at the deep end with "Foundations of Logic Programming" by J.W. Lloyd, because I've learned the hard way that teaching Prolog as a mere programming language, without explaining the whole logic programming thing, fails.


Thanks for the reference. Have you ever worked with Maude? Curious what the advantages of one over the other might be. Maude seems like it might be more focused on being a meta logic, and I'm guessing it is probably easier to write programs in Prolog.

I've never worked with Maude.

>> I expect by this time tomorrow I'll have been Cunningham'd and there will be a 2000 word essay about how all of my gripes are either easily fixable by doing XYZ or how they are the best possible choice that Prolog could have made.

In that case I won't try to correct any of the author's misconceptions, but I'll advise anyone reading the article to not take anything the author says seriously because they are seriously confused and have no idea what they're talking about.

Sorry to be harsh, but it seems to me the author is trying their damnedest to misunderstand everything ever written about Prolog, and to instead apply entirely the wrong abstractions to it. I don't want to go into the weeds, since the author doesn't seem ready to appreciate that, but Prolog isn't Python, or Java, or even Picat, and to say e.g. that Prolog predicates "return true or false" is a strong hint that the author failed to read any of the many textbooks on Prolog programming, because they all make sure to drill into you the fact that Prolog predicates don't "return" anything because they're not functions. And btw, Prolog does have functions, but like I say, not going into the weeds.

Just stay away. Very misinformed article.


The syntax of Prolog is (a fragment of) the syntax of First Order Logic. It's not supposed to look like your friendly neighbourhood programming language because it's mathematical notation.

Count yourself lucky you (probably) learned programming in a language like Java or Python, and not, say, FORTRAN. Because then you'd really pray for the simplicity and elegance of definite clauses.

(Or not. FORTRAN programmers can write FORTRAN in any language, even FORTRAN).


>> Programmers are (or were?) expensive because, at least in recent times, talented ones are expensive because they are rare enough.

In all the years I worked in the industry, I never knew anyone trying to hire "talented" programmers. Only trying to hire people, usually inexperienced juniors, willing to work twice the time they're paid for if you tell them how smart they are.


Ya, there is that also. But sane orgs will want to hire programmers with some level of talent, at least. Not just some kid out of bootcamp; they will have to show that they can actually program something first.


> In all the years I worked in the industry, I never knew anyone trying to hire "talented" programmers

I think this says more about the places you worked.


Harsh, but some people just gotta hear that.

... although it's a bit unfair to the many tech people who never wanted to throw artists down the loo or indeed anyone else. E.g. when I was fiddling with language generation during my MSc it never occurred to me that someone would want to use it to replace writing, let alone coding. What would be the point in that?


I mean, I know that — I was a developer. Generally, something doesn’t have to be universally true to be true enough to matter.

