I'm interested in hard problems.
Recently, I've started thinking a lot about what CP6AN might look like.
Class::MOP and the Perl 6 Metamodel make me more excited than I'd like to admit.
Also expect occasional wordy technology-related rantings.
In a press release today, Imtel announced a new processor hardware component, called a Memory Management Unit.
"The MMU will supplement our existing VX-t technology in offloading virtualization overhead to the processor itself," said Imtel's Bob Brooke.
While it has been possible for years to give applications the illusion of controlling the computer's entire memory address space, the in-software techniques employed by hypervisors have always meant taking a performance hit, especially when it was necessary to swap memory to disk. The MMU allows applications to send instructions directly to the processor, negating the need for a software-based address translation layer.
But Imtel claims the biggest benefit will be hardware-enforced "memory protection", ensuring that different applications cannot read or modify each other's memory. Brooke went on to claim, "The containerization benefits of the MMU will provide unparalleled levels of security."
VMWear's Chan Du declined to comment specifically when asked if the MMU would be utilized by their upcoming hypervisor technology, the so-called Operating System Kernel. He would only say, "Operization Technology will provide many new ways to abstract hardware from individual applications, but it is our policy to not discuss features of unreleased products."
I'm convinced that some problems are considered hard mainly because they're really good at exposing unquestioned assumptions at different levels of a system.
In concurrency, one of the levels whose assumptions get challenged is the programmer's, but that's not the only reason it's hard.
If you don't know why a particular tool sucks, you haven't been using it for long enough.
This is especially true for programming languages and software.
2) Device drivers and operating systems are written exclusively in C. Now, you may never write a device driver or an operating system, but what if you are ever required to modify one?
Rare modification of device drivers aside, I think a better reason (sorta related to this) is that C was designed to write Unix in. Learning C may allow you to learn more about the internals of your OS.
4) C programs are smaller and faster than any other program created in a different language. Sometimes your program needs that speed boost that only C can give it.
False. Sorry, but this is no longer the case. Compiler research has come a long way.
For example, look at perl's highly fine-tuned regular expression engine written in C. CL-PCRE, a Perl-compatible regular expression library written in Lisp, claims to outperform perl's (when compiled with CMUCL). Yes, there are many problems whose C implementations are faster, but this does not hold true for all programs or all architectures.
5) If you have learned C, you can learn any modern programming language. The reason behind this is that all modern programming languages are based on C.
Many programming languages have syntax that's kind of like C. But all of them? Scheme is based on Lisp. Prolog is almost nothing like C. Neither is Haskell. I have trouble believing that SQL and C share much in the way of parentage. XSL? I could go on.
Much more important than syntax is paradigm. My problem with the "learn C and you can learn anything" argument is that maybe if you learn C you can learn other procedural languages. Knowing C ain't gonna get ya very far in learning functional or logic programming. And most of the languages that are based on C have powerful language constructs that C lacks. Closures, for example.
8) C is the only language that teaches you what pointers really are. C# and Java skip the subject completely. It is pointers that give C its power.
A power we have yet to fully understand how to harness. Like goto. The indirection role is played by references in Perl and equivalents in other languages. The contiguous addressable block of memory? Arrays and iterators mostly take care of that. The fact that dynamically allocated memory is only accessible through indirection is not a feature, though.
Face it, C's pointers (and the language infrastructure they're related to) are the reason buffer overflows exist. Let's figure out how to stuff them back into Pandora's box before they become widespread. Oh wait, we didn't.
I can see the power of C's pointer implementation because it allows for all kinds of crazy Turing-machine possibilities. With those possibilities come risk.
9) C is still the most commonly required language for programming jobs.
My gut feeling is that the language of choice for PHBs these days is Java, but I have no data to back up my assertion either.
10) Anything that has a microprocessor in it has support for C. From your microwave to your cell phone, C powers technology.
I could sort of see this argument being valid if you were talking about IA32 (which seems to have a lot of C primitives as instructions), but otherwise not so much. The processor has no support for C.
If you want to argue almost every architecture has a C compiler for it, you'd be a lot closer. Some of this is because C is self-hosted, but some is also because C is good for getting close (but not too close) to hardware.
Please note that I am not trashing C. I happen to think it is also worth learning, for different reasons. I'll have to write those down later.
Step up, one and all, and see me modify a simple scalar only by reading it. Nothing up my sleeves.
my ($num, $let, $foo);
$num = "z";
$let = $num;
#$foo = 1 + $num; #this does not modify $num
$num++;
print "num:$num let:$let\n";
Output with the 1+$num line commented out: num:aa let:z. With it uncommented: num:1 let:z (and, under warnings, an "isn't numeric" complaint). Once $num has been seen in numeric context, the ++ does a plain numeric increment instead of the magical string increment. This means we have a line that makes no assignment to a variable, but still changes that variable.
Critics will call this dangerous and unexpected action at a distance. Proponents will note that a warning is triggered, read-only statements are still an indication of What You Mean, and that this probably makes excellent fodder for obfus.
Admittedly, this is (sorta) documented. Perlop mentions "If, however, the variable has been used in only string contexts since it was set," the ++ operator acts differently.
All this discussion of side-effects has reminded me of something.
A particular bit of code has, potentially, a return value, and side-effects. Other parts of the program have a dependency on a bit of code if they rely on either the code's return value, or side-effects. Think of return values as explicit dependencies, and side-effects as implicit dependencies.
Purely functional programming languages are easier to parallelize because all of the dependencies in the code are explicit.
It would seem that parallelization of non-functional programming languages would be possible (without explicit design by the programmer) if we had a better way of analyzing code to map out dependencies.
I'll expand on this a little bit when I have more time.
I unfortunately can't find the source right now, but there's a bit of wisdom that says that programs are written primarily to communicate with humans, and secondarily to communicate with compilers.
I bring this up because of the quote at the top of this perl5-porters summary.
Many of our furious debates over code style, especially whitespace formatting, are annoying precisely because they deal with code at only the human level. The compiler usually doesn't care about tab-stop preferences.
So I started thinking about being able to mark diffs as style changes versus code changes, so that version control history is preserved because you can query only the diffs that actually change how the program runs. I had some ideas about schemes involving some bizarre combination of perltidy, PPI (which can losslessly roundtrip code), B::Bytecode, and optree comparisons.
And then I realized maybe this wasn't such a great idea.
Something a lot of people in web design have been struggling with for years is how to separate content and presentation. Of course, the two really can't be completely separated. Say you have content that is structured purely on the basis of semantics. You want its display to reflect certain aspects of its structure.
It's really similar to trying to separate data and code. Everyone agrees that abstractly it's a good idea so you don't have problems like buffer overflows and SQL injection attacks. But some of the most powerful things computers are capable of require crossing the boundaries of data and code. Turing Machines are powerful for the precise reason of having data-that-is-also-code.
It's all nice for me to say the two should be separate, but it's an unrealistic oversimplification.
My heart goes out to the Reiser family.
Comments turned off, because I've already been saturated with discussion on this elsewhere.
CromeDome wrote about his reaction to the nature of the discussion, though.
If I do not use "the same terms as Perl itself", would that prevent you from using my code (especially CPAN modules)? Contributing to it? Distributing it? (EDIT: Not so much from a legal basis as if you'd decide "I don't want to have to figure out the legal crap here" and just give up. ENDIT)
This thread on the LKML is fascinating. Let's put aside discussions of the GPLv3 for a second and look at another argument Linus made in that thread, that the "or any later version" clause essentially means that you're agreeing to license your code under the terms of a license you haven't seen. I buy his argument, and so I'm trying to figure out how to license further code I work on.
Now, previously, when licensing FOSS Perl stuff I've written, I've used "the terms of Perl itself": nice, easy, and license-compatible with most other Perl stuff.
Well, the terms of Perl itself include the "or (at your option) any later version" clause. (The version of the GPL they specify is also Version 1, which I'm not actually sure I've read.)
So ideally what I'd like to do is license my code under the Artistic License 1.0 (or whatever the canonical name is for the current Artistic License) or the GPLv2.
Would this cause problems for others?
Because ultimately, if it will, I might decide that contributing code that people can use is more important than the rest of this licensing stuff. I welcome your input.
Please note that I'm not asking for legal advice. I'm asking about how your actions would be influenced.
Also note that if I were to contribute to someone else's project, I would most likely license my code under whatever terms they had chosen to license their code.
I was really starting to miss jjohn's MarkovBlogger. Until I started getting what looks to be Markov-generated spam. Usually it's just annoying, but this one (which failed to actually advertise anything) was hilarious.
If Kafka, ee cummings, and Kent Beck all had a child who was dropped on its head repeatedly, the child might one day create this:
From: Mohammad Rutherford
Subject: Watch Chavez
so that you can spend (and impress cocktail party guests) challenging. Something But you don't just of the best practices be wrong (and what somewhere in the world
Patterns--the lessons more complex. (and too short) to spend somewhere in the world how patterns are the same software deep understanding of why patterns look in you don't want to words, in real world you have. You know when he casually mentions
also want to learn be wrong (and what In a way that lets you put sounds, how the Factory of Design Patterns so used in the Java API
and why everything
look "in the wild". a design paddle pattern. so you look to Design want to see how science, and learning theory,
so that you can spend up a creek without challenging. Something the embarrassment of thinking so that you can spend neurobiology, cognitive it struggling with academic In their native applications. You challenging.
used in the Java API same problems.
Singleton isn't as simple as it
your time is too important
it struggling with academic to use them (and when when he casually mentions (and impress cocktail party guests)
your time on...something design problems
Head First book, you know
when he casually mentions
Head First Design Patterns so that you can spend brain in a way that sticks. is so often misunderstood,
on your team. when he casually mentions
alone. At any given moment,
alone. At any given moment,
patterns look in who've faced the You'll easily counter with your alone. At any given moment,
"secret language" a design paddle pattern. reinvent the wheel
you have. You know Something more fun. design problems
, and how to exploit Design Patterns, you'll avoid his stunningly clever use of Command, In their native of Design Patterns so to use them (and when a book, you want
on your team.
support in your own code. neurobiology, cognitive
Best of all, in a way that won't
up a creek without Best of all, in a way that won't
learned by those
them to work immediately. reinvent the wheel what to expect--a visually-rich Best of all, in a way that won't to do instead). You want
when to use them, how With Design Patterns, matter--why to use them, (and too short) to spend
of patterns with others of patterns with others want to see how of patterns with others who've faced the
somewhere in the world
will load patterns into your of patterns with others Head First book, you know real OO design principles the patterns that
them to work immediately. texts. If you've read a better at solving software
between Decorator, Facade who've faced the
design problems who've faced the be wrong (and what challenging. Something his stunningly clever use of Command, sounds, how the Factory
be wrong (and what
Something more fun.
BTW, does use.perl have something equivalent to Perlmonks's
Edit: Dear lord. I just got another one that obviously uses Hitchhiker's Guide to the Galaxy as one of its source texts.