You are currently browsing the category archive for the ‘softwares’ category.

I love Youtube. Almost every day, on average, I end up spending (or at times wasting) some time there. Today was no exception, yet I was pleasantly surprised to hit upon some videotaped lectures of Richard Hamming. These were apparently recorded in 1995 and cover a wide variety of topics. I didn't get to go through all of them, which hopefully I will do sometime. I particularly liked this one on Discrete Evolution. The depth of knowledge these folks have is immense. What is astonishing is their ability and skill in connecting their point of expertise to a vast range of physical analogies. Long live the internet!

It was interesting reading up on this piece of remake; somewhat of a historical remake, so to speak. That classic photo of Paul Allen and Bill Gates, shot as young geeks in 1981, now has a complementary remake with a new, yet 'older', avatar!

Looks like Mathematica 9 has been released. I haven't yet had a chance to take a look. Glancing through the release notes, a few interesting things I hope to try at some point are:

- Signal processing, which for some reason has been fairly weak in Mathematica to date, compared to Matlab for instance.
- The (random and social) network analysis tools, which I hope they have made powerful.
- Integration with R.
- New features for time series and random process analysis, and maybe more.

Cary Huang and his collaborators made this stunning work showing the scale of our Universe. We get a gauge of everything from the tiny Planck length to the grand size of the observable Universe! The work of these lads leaves me speechless. It gives you a one-shot view of various things.

Seeing this video, my daughter put us in a fix. Out of innocence, she asked us: How big is the Universe? I said, we don't even know precisely how big it is; so far, the known size of the observable Universe is so and so, I added. Then how come God knows all this, she probed. I followed: there are roughly two schools of thought. One believes that God created all this; the other believes that everything, including the Universe, evolved over time.

She was quick to say that she belongs to the second category. My wife instantly claimed affiliation to the first league. She asked which I would choose: wife's or daughter's side! I would rather evade that, I nodded. My kid wouldn't let me escape that easily. Finally I gave in and said, I am more inclined to believe the second! She was all happy!

The one argument I had been evading all along till today came all too suddenly! I simply wanted them to figure it out and rationalize it themselves in the years to come, without any parental influence or bias. But kids at times surprise us, don't they? The profound words of Wordsworth lingered: The Child is father of the Man! Truly!

A sad end to what looked like a promising and prodigious mind, complicated by many wizardly, perhaps at times turbulent, actions and, more so, hasty reactions from various corners of our society, including the law enforcement offices. The news of Aaron Swartz's death at the young age of 26 is disturbing. The man who, at the age of 14, sparked into stardom by helping create the now popular tool RSS for information subscription is no more! More than the wizardly invention, his name unfortunately caught the wider limelight through the MIT/JSTOR document-retrieving case. He later championed several causes on free information access. The right to free information in the internet era once again caught worldwide attention with that drive. It is difficult to take sides in this case, because it is entangled with multiple levels of complications involving the right to information, ethics, social stigma, the law of the land, money, business, a wizardly mind and, of course, the turbulence of the human mind!

I read his uncle's statement: "He looked at the world, and had a certain logic in his brain, and the world didn't necessarily fit in with that logic, and that was sometimes difficult." I couldn't agree more with these words of Mr Wolf on Swartz. Don't forget he was an ardent contributor to Wikipedia as well. Rest in peace, Aaron!

The SODA 2012 paper by Dina Katabi, Piotr Indyk and their co-authors promises a relatively faster way to compute the DFT of sparse signals. It is getting traction from outside the community too, after the MIT broadcast. The well-known FFT, which originated with Gauss and was largely resurrected by Cooley and Tukey, has time complexity \mathcal{O}(n\log n), which offers significant gains over the conventional DFT complexity of \mathcal{O}\left(n^2\right) when the size n is large.

If we really dissect the FFT complexity, it is already pretty good. With n points to compute, the complexity will be proportional to n, and roughly, the cost per point is \log n.

Now, what the new scheme promises is not a change in the order of complexity, but a clever way of reducing it by exploiting the inherent signal sparsity. When the signal is k-sparse (i.e., only k among the n coefficients are significantly different from zero), it is natural to ask whether we can indeed get to complexity \mathcal{O}(k \log n), and Katabi, Indyk and their co-authors have indeed got there. This is quite a remarkable achievement, considering that the gain could be as good as the compressibility limit of most of the real-world signals we deal with today. Signals such as audio and video are the leading candidates in the practical signal processing world, and both are sparse in some transform basis. Recall that the recent compressed sensing results for k-sparse signals showed the potential benefits of sparse signal processing, and this new scheme will help realize many of them in a more meaningful way. One good thing is that this generalizes the conventional FFT: it is not just for sparse signals, but holds for any k, and in the limit k \to n the complexity is as good as that of the old mate FFT!
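To see what k-sparsity in the Fourier domain means, here is a minimal numpy sketch (illustrative only: it uses the plain \mathcal{O}(n\log n) FFT, not the new sublinear algorithm, and the test signal, names and threshold are my own):

import numpy as np

n, k = 1024, 4  # signal length and sparsity level
rng = np.random.default_rng(7)
freqs = rng.choice(n, size=k, replace=False)  # k active frequency bins
t = np.arange(n)

# Superpose k complex tones: the DFT of this signal is exactly k-sparse.
x = sum(np.exp(2j * np.pi * f * t / n) for f in freqs)

X = np.fft.fft(x)  # dense FFT, O(n log n)
support = np.flatnonzero(np.abs(X) > n / 2)  # the k significant bins
print(sorted(support) == sorted(freqs))  # True

A sparse-FFT style algorithm aims to identify those k significant bins (and their values) in time roughly proportional to k \log n rather than n \log n.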

I want to try this out sometime for some communication problem that  I have in mind. At the moment, I am hopeful!

Another stalwart, the founding father of Unix and C, and in many ways one of the founding fathers of computing itself, Dennis Ritchie, has passed away. For me, Ritchie, along with Kernighan, was one of the first names registered in my mind since learning computer programming. The first book I ever saw on a programming language was this duo's book on C. And boy, wasn't that the most concise book in technology: ever so compact, and yet rich and clean!

Come to think of it, Ritchie's impact on modern science and technology is enormous. He may not have been a very public figure, but his contributions are indeed the touchstone of the modern world, especially the information-network world. Much of the internet, Google, iPhones and what not; almost everything we can think of runs on his stuff or its variants. Quite remarkable.

I think the most apt summary of Ritchie's contribution comes from Brian Kernighan himself. He said, "The tools that Dennis built, and their direct descendants, run pretty much everything today."

 

Just heard from my EPFL folks that Mathematica 8 has just been released. I am yet to get a chance to see it working, but I look forward to it someday. I am quite happy with Mathematica 7 already, but Wolfram always brings radically new stuff, which is special. In fact, I am mighty pleased with version 7, but who knows what new stuff is there in 8? I have been a huge fan of Mathematica since the early 2000s, ever since Nandu (Nandakishore Santhi) introduced this new tool to me. I use it pretty much for every computational mathematics job. These days, I even use it for plotting, much more than Matlab. Looks like there are a lot of new features added in 8. One of the claims from Wolfram is that they added a lot of new aids for working with (probability) distributions, which I think is going to help me a lot. One stellar new thing I notice (from the announcement) is the linguistic argument support. Boy, that is one hell of a killer feature. Forget the syntax then: if you want to plot sin(x) with the grid on, type just that sentence! That's it! The rest Mathematica will do. Wow! How much is it for an upgrade? Or should I go for a trial? I can't wait!

A pretty cool handwritten TeX symbol identifier has been unleashed, and it is known as Detexify. I thought this is such a handy piece of online kit for the TeX community. One can draw a symbol and it simply displays a list of the nearest matching symbols. There is no absolute guarantee that it displays the intended LaTeX symbol immediately, but it does the job pretty well on most occasions.

The much expected Wolfram Alpha went for a soft launch last night. It had some start-up glitches, as Wolfram mentioned during the live demo, but fortunately nothing major prevented me from getting a first feel of it. Erick Schonfeld has a nice blog post with a detailed first-hand description of this new computing search engine. He also did a one-to-one comparison with Google for a few specific search queries.

My first impression is along much the same lines as what I expected after reading Wolfram's pre-launch blog. This is not a Google competitor for sure, but instead an incredible complement. Wolfram Alpha is more of a scientific and quantitative information search engine. For instance, if you want to know the Taylor series expansion of the exponential function e^{x/2}, you can get it easily by entering "Taylor series of Exp[x/2]". As you would imagine, Google does not give this precise answer, but instead gives you a list of documents matching the query, for instance a set of PDF links where this is already calculated. Clearly, Wolfram Alpha gives a more accurate and clever presentation of the query result. Wolfram Alpha seems to use quite a lot of Mathematica capabilities too, like plotting. Any mathematical query will lead to pretty good results, sometimes including plots, histograms, Taylor expansions, approximations, derivatives, continuity properties, etc. It is a nice feature set for students and engineers.
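For reference, the first few terms it should return for that query work out to

e^{x/2} = 1 + \frac{x}{2} + \frac{x^{2}}{8} + \frac{x^{3}}{48} + \mathcal{O}(x^{4})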


This is the sort of query it likes the most, not something like "proof of Sanov's theorem". Google will incredibly list a set of documents which contain the proof one is looking for, since it simply searches the web and displays a list of matching documents, ordered by PageRank, which is, loosely speaking, the order of relevance.

Not all queries are bound to get a result with Wolfram Alpha, at least for now. That is expected, since it is not yet in full launch mode, only soft launch. In the coming days they are likely to have it running full-fledged, with all kinds of queries supported.

So, Wolfram Alpha is definitely going to be useful in very many cases, and it surely is going to rock in scientific searches. I initially thought Google Squared, which is going to come from Google shortly, addresses the very same segment of the search space, but it is clearly different.

I tried "tallest mountain Switzerland". It gave a very nice, cute, quantified table. I love this kind of result. It also states things with less ambiguity. For instance, the height is mentioned in meters, but a list of unit conversions is given alongside, which helps people map the values into the units of their convenience.

I tried the query "Who is Claude Shannon". The result you get is a very brief piece of information about him. The same query in Google will lead you to the more detailed Wikipedia entry on Shannon, or maybe the MathWorld entry, among the list of hits. Wolfram Alpha gives information more in capsule form: if you need to know more, you should ask more. Clearly, which search engine to use is thus subject to the query type. I strongly see Google and Wolfram Alpha as complementary. Wolfram Alpha gives more or less one reply to a single question; of course, you can refine the query and then get an answer to that. In some sense, this is like people asking questions to one another in a real physical scenario. Imagine you ask a friend, a knowledgeable pal that is: Who is Shannon? He would perhaps start answering along those lines, as Wolfram Alpha does; on repeated questioning he will give more details. On the other hand, Googling is like broadcasting your query to a large pool of friends, each one of whom sends what they know or heard about Claude Shannon. It is you who decides which among the many answers/explanations suits your need!

We can afford some amount of spelling error while entering queries in Wolfram Alpha. Since it is natural language based, that is a decent feature to have. I deliberately typed the query "distnace from Bangalore to geneva" instead of "distance from Bangalore to geneva". It understood the intended query and displayed the result in a nice quantified table. Even the geographical trace between the two places is shown. Incredible!

When I tried "weather in Lausanne", it was as good as it gets: spot on, with all possible things you want to know on one screen! It even had a list of mountains and their heights!

In a nutshell, Wolfram Alpha serves you the best cooked food for the recipe you give as input; Google gives you a list of available foods, and you pick the one whose taste suits you. It then really is a question of preference, time, and the satisfaction of the end user. As far as I am concerned, it is subjective. I see both of these as invaluable, and both will co-exist. Scientists, economists, finance folks, mathematicians and historians are all bound to benefit from this new computing engine. I am waiting for a full release!

I am eagerly waiting for this new search-and-compute engine promised by Stephen Wolfram. They call it Wolfram|Alpha (if Google always went with beta releases, Wolfram is going even earlier). This, if it works along the promised lines, is going to rock the Internet's evolution. From the outset, this is not just a search engine; it is a kind of intelligent searcher that can loosely understand human requirements.


For long, it was perceived that a search engine driven by natural language processing is the way forward. But it is pretty hard to build such a system, since natural language processing is no mean business. Wolfram's idea is to create an abstraction, and then algorithms, out of realizable models. Once we can map the requirements to algorithms that are computable, at least in principle we can build such a system. But those are already a whole lot of heavy statements. How easy is it to build all these methods and models into an algorithmic framework? He is using the New Kind of Science (NKS) armoury to realize that. We will have to wait to see the full rainbow, but when he promises, we can confidently expect something big.

Now, once the algorithmic mapping (and implementation) is done, the question of natural interaction between humans and the system comes up. Natural language is the way, but according to him we don't have to worry about doing that as such: once the knowledge of the individual is made into a computational framework, that is enough. I am not an expert in natural language processing or the NKS framework, but for sure this is pretty exciting, both from an algorithmic point of view and as a practical Mont Blanc to climb. As Wolfram himself pointed out, "Pulling all of this together to create a true computational knowledge engine is a very difficult task." Indeed, it is still considered a difficult problem, both in academia and industry. So there is excitement aplenty in the offing. I am eagerly waiting for this to hit soon.

Consider that the bigwig search engine houses, including Google, are still struggling to make that dream natural language engine (the many pseudo ones in the market do not quite pass muster). I remember http://www.ask.com started their business along those lines, but never seemed to cross that elusive mark of acceptance, at least not to an extent that captured a worldwide wow! If Wolfram has a new way to get this through, that will be a big breakthrough. I can't wait to see it. Wolfram promises that it is going to be very soon; he says it is in May 2009. My guess is that they will release it on May 14, 2009.

I had earlier promised an update on Xitip when a Windows setup was ready. Though delayed, I have something to say now. I have finally made a Windows installer for the (information theoretic inequality prover) Xitip software, which has been working pretty smoothly on Linux, Cygwin and Mac for a while. I was not too keen on making this Windows installer, since a few DLL files are involved. Besides, it was a bit painful to include these nasty DLL files, which unnecessarily increase the bundle size. Some of them may not be required if Gtk is already installed on the machine, but anyway I made a double-click style version to suit the layman Windows users in the information theory community.

Vaneet Aggarwal is the one who motivated me to make this, since he uses Windows. He showed some interest in using it, should a Windows version be available. If at least one user benefits from it, why not make it? In the process, I got to learn an easy way to produce a Windows installer (setup maker) program. I used the freeware Install Creator to produce it.

I will put this installer up at the Xitip website, but for the time being you can access it from here. A lot of people have suggested revamping the Xitip webpage, which is pretty unclean at the moment. Maybe a short tutorial is also due. That will take a while; the next two and a half months are out of the equation, since I am pretty busy till then.

The latest talk/demo at TED breathed fresh life into the possibility of a sixth sense. The MIT Media Lab has now unveiled a prototype of the sixth sense setup. The whole thing is reasonably economical already, and all indications are that it is going to rock some day. An incredible idea which went all the way to realization. Kudos to Pranav Mistry, Pattie Maes and their team. One thing I am really hoping for out of it is that it paves the way to assist disabled people. For instance, a blind, deaf or mute person finding avenues to get a sixth-sense aid would be really helpful.

http://www.ted.com/talks/pattie_maes_demos_the_sixth_sense.html

Video Link

Inkscape has come of age. Creating vector graphics has now become pretty cool with Inkscape. I too tried to make one. The one I tried is an egg. After all, I am convinced that the egg comes first and then the chicken (I mean, in eating preference). The drawing is not all that neat, but then, I am a novice when it comes to Inkscape. This simply is my first drawing and I will be excused, won't I? I am posting the scalable vector graphics (SVG) file, just in case a random enthusiastic reader finds it useful to make it better!

The Inkscape source file (SVG) is available from this link.

Wondering why I created this figure? Well, I was trying to illustrate P versus NP. My idea was to say the yellow yolk is P and the white outer part surrounding the yolk represents NP. Anyway, that is for another post!

Eggs drawn using Inkscape
A gallery of SVG files (various folks contributed their entries) is archived at the Open Clip Art Library website, where you can search and find some: http://www.openclipart.org/
Since you are my dear reader, it is my privilege to serve you something. Look, I took the extra pain to borrow these yummy things (from here) for you. Enjoy them while they get cooked.

Egg ready to pan

While waiting you can have a cheerful drink!
…and keep visiting me for more!

Last winter, Etienne Perron, Suhas Diggavi and I together developed a tool suite to prove inequalities in information theory. The tool is adapted from the earlier work of Raymond Yeung and Ying-On Yan at Cornell. We have made it complete C-based software and removed the Matlab dependency in the back end. There is also a built-in pre-parser (using lex and yacc) to allow flexibility in choosing random variable names. More importantly, a graphical front end was developed (using Gtk), which works well across platforms. Even though the beta version was ready in late 2007, for many reasons, including exhaustive testing (we always find scope for improvement), it was delayed. Last month, we finally made an official release. The original Xitip project page at IPG has a short description and a pointer to the exclusive Xitip page at EPFL (http://xitip.epfl.ch). A lot of things still need to be done before we can say it is satisfactory. One of the main things pending is the user guide and some kind of exemplified documentation. There is a technical report I have prepared, but it is a bit too technical at the moment. Of course, Raymond Yeung's amazing papers introducing the theoretical idea behind this prover, and his book, are valuable resources. I have tried to provide an easier understanding of the concept using some illustrations and toy examples. I hope to put this report in the EPFL repository sometime. The first version of the report discussing the background is available here in PDF form.

Xitip screenshot, the French version

The software is open source. If you do not want to bother compiling and making an executable yourself, then please download the binary executable and just run it; it is just a matter of a double click in that case. We have Linux, Windows, Windows (Cygwin) and Mac versions available. Two different linear programming packages are used: one is the GNU open source GLPK, and the other is QSopt (developed at Georgia Tech). The QSopt version is faster than the GLPK one. Just in case you are obsessed with a perfect open source model, you can avail yourself of the GLPK [5] version.

Hopefully during this summer we will get to complete the pending work on this project. If any of you happen to find it interesting, please don't forget to update us on what you thought about the software (comments can be good, bad and ugly!).

Aside, I had better mention this: Xitip is software useful for proving (verifying) information theoretic inequalities [7] only. Such inequalities contain expressions involving measures such as entropy, mutual information, etc. It is a pretty handy tool if you are trying to prove some limiting bounds in information theory. In reality, there is a broad classification into Shannon-type and non-Shannon-type inequalities. Non-Shannon-type inequalities are not many, but they exist. Xitip at the moment is equipped to solve only the Shannon-type inequalities. You can expect more information on this at the Xitip home page [2].
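To give a flavour of the kind of statement it verifies, here are a few elementary Shannon-type inequalities (all consequences of the non-negativity of conditional entropy and mutual information); for the exact input syntax, see the Xitip page [2]:

I(X;Y|Z) \geq 0
H(X|Y) \leq H(X)
H(X,Y) \leq H(X) + H(Y)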

[1] http://ipg.epfl.ch/doku.php?id=en:research:xitip
[2] http://xitip.epfl.ch
[3] http://www2.isye.gatech.edu/~wcook/qsopt/
[4] http://user-www.ie.cuhk.edu.hk/~ITIP/
[5] http://www.gnu.org/software/glpk/
[6] http://en.wikipedia.org/wiki/Information_theory
[7] http://en.wikipedia.org/wiki/Inequalities_in_information_theory

A very useful trick for using psfrag with pdflatex came up here. I am just blindly copy-pasting this from the source link [1] (fearing that the original link may go missing for whatever reason; it is not all that hard to redo, but someone may find it useful and save some internet searching time).

[1]http://www.tat.physik.uni-tuebingen.de/~vogel/fragmaster/main.html

Using psfrag with pdflatex


psfrag is a LaTeX package which allows you to replace text elements in included EPS graphics with arbitrary LaTeX output. E.g., you can make the fonts in your graphics match your document fonts, or even include mathematical formulae in your graphics. For example:

 

\psfrag{x}{$x$}
\psfrag{y}{$y = x^2$}
\includegraphics{diagram}

When using latex (not pdflatex), the file diagram.eps will be included; the extension is appended automatically. While doing this, every occurrence of "x" in the diagram is replaced by "x" in math font, and every "y" is replaced by the LaTeX formula "y = x^2". Partial strings are not replaced; only completely matching strings are.

Because psfrag uses Postscript to make the replacements, in principle you can't use psfrag with pdflatex, which doesn't have any interface to Postscript.

A possible way around the problem is the following:

The basic idea is to produce, from your original EPS, a new EPS which already contains all those psfrag replacements. This new EPS graphic can then be converted to PDF, including all replacements. The resulting "encapsulated" PDF can then be used with pdflatex.

To make such an EPS which already contains the replacements, it is necessary to create a separate LaTeX document for every EPS file you use. To simplify that task, I wrote a perl script which can be downloaded right here:

fragmaster.pl

This script needs: perl, latex, dvips and the common EPS to PDF converter script epstopdf.

To use the script you have to create two files per graphic:

 

  • <base>_fm.eps: the EPS file itself,
  • <base>_fm: a fragmaster control file.

From these files the psfragged graphics will be created:

 

  • <base>.eps,
  • <base>.pdf

The control file is basically a LaTeX file (with optionally special comments) and can look like this:

 

% Just an ordinary comment
%
% A special comment:
% fmopt: width=6cm
%
% Another special comment:
% head:
% \usepackage{amsmath}
% end head

% psfrag commands:
\psfrag{x}{$x$}
\psfrag{y}{$y = x^2$}

The special comment fmopt: will be evaluated such that the following text will be passed as an optional argument to \includegraphics. This way you can, e.g., adjust the relation between graphics size and font size, using something like width=6cm.

The special comment construct "head:"/"end head" causes the lines in between to be included in the preamble of the temporary LaTeX document, after having the leading comment characters "%" stripped off. This way, you can include LaTeX packages.

fragmaster.pl will scan the current directory for files which end in _fm and their _fm.eps counterparts. Looking at the modification dates, the script checks if the output files have to be remade and does so if necessary.

In your LaTeX document you can include the produced graphics using

\includegraphics{<base>}

conveniently omitting the file extension. latex will choose the EPS, pdflatex will choose the PDF.


Problems and solutions

In case the EPS will be produced as landscape graphics, i.e. gv shows “Landscape” instead of “Portrait” in the menu bar, and the graphic will end up turned around 90° in your document, then it is likely that your original EPS is wider than it is tall. In this case some (more recent) versions of dvips make the “smart” assumption that your graphic is landscape, even though the graphic’s proportions don’t tell anything about the orientation of its contents… Anyway, you can make dvips behave nicer by specifying the following line in /usr/share/texmf/dvips/config/config.pdf (or a local equivalent inside /usr/local/share/texmf):

@ custom 0pt 0pt

In the likely case that you’re wondering why, I’d recommend the dvipsk sources (to be found in the tetex bundle) warmly to you…

Have fun with the script! Feedback is very much appreciated.

Tilman

 


The Perl script itself (downloaded and interlaced from [1]) is given below. Copyright and other terms are as listed below (see [1] as well).

#!/usr/bin/perl -w

######################################################################
# $Id: fragmaster.pl,v 1.3 2006/09/26 08:59:30 tvogel Exp $
#
# fragmaster.pl
# creates EPS and PDF graphics from source EPS and control files
# with \psfrag commands
#
# Copyright (C) 2004 Tilman Vogel (dot at dot)
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# IMPORTANT: ALLOW DVIPS TO MAKE _PORTRAIT_ PS WITH WIDTH > HEIGHT
# BY ADDING
#
# @ custom 0pt 0pt
#
# TO YOUR /usr/share/texmf/dvips/config/config.pdf
# IF THIS ENTRY IS MISSING, DVIPS WILL GUESS ORIENTATION FROM
# WIDTH / HEIGHT RATIO. THIS STILL CAN HAPPEN IN CASE YOUR INPUT EPS
# MATCHES A STANDARD PAPER SIZE!
#
# Source files:
# <base>_fm.eps
# a source EPS file
# <base>_fm
# a control file containing \psfrag commands and optionally
# special comments:
# % fmclass: <class>
# use <class> instead of "article"
# % fmclassopt: <options>
# use <options> as class options instead of "12pt"
# % head:
# % <preamble lines>
# % end head
# causes <preamble lines> to be put into the preamble
# % fmopt: <options>
# causes <options> to be given to \includegraphics as
# optional parameter
#
# fragmaster.pl scans the current directory for files matching the
# pattern "*_fm" and "*_fm.eps" and converts them to the respective
# ".eps"- and ".pdf"-files if they are outdated.
#
# Credits:
#
# This script was inspired by a posting from
# Karsten Roemke (dot at dot)
# with subject
# “psfrag pdflatex, lange her”
# in de.comp.text.tex on 2003-11-11 05:25:44 PST.
#
# Karsten Roemke was inspired for his solution by postings from
# Thomas Wimmer.

use Cwd;

$cwd = cwd();
# Name of the temporary build directory (assumed; the original
# definition was lost in the copied listing).
$tempdir = "fm_tmp";

die "Current path contains whitespace. I am sorry, but LaTeX cannot handle this correctly, move somewhere else. Stopped"
if $cwd =~ /\s/;

# Restored: iterate over all fragmaster control files in the current
# directory (the glob was eaten as an HTML tag in the copied listing).
foreach $fm_file (<*_fm>) {
($base = $fm_file) =~ s/_fm$//;
$source = "$fm_file.eps";

if(! -f $source) {
print "Cannot find EPS file '$source' for fragmaster file '$fm_file'! Skipped.\n";
next;
}

$dest_eps = "$base.eps";
$dest_pdf = "$base.pdf";

$do_it = 0;

$do_it = 1
if ! -f $dest_eps;
$do_it = 1
if ! -f $dest_pdf;

if(! $do_it) {
$oldest_dest = -M $dest_eps;
$oldest_dest = -M $dest_pdf
if -M $dest_pdf > $oldest_dest;

$youngest_source = -M $fm_file;
$youngest_source = -M $source
if -M $source < $youngest_source;

# Restored comparison (lost in the copied listing): remake if the
# newest source is newer (smaller -M age) than the oldest product.
$do_it = 1
if $youngest_source < $oldest_dest;
}

if( $do_it ) {
print "$fm_file, $source -> $dest_eps, $dest_pdf\n";

# Restored: read the control file via FMFILE and write the LaTeX
# wrapper via TEXFILE (the '<'/'>' redirections were eaten as HTML
# tags in the copied listing; the mkdir placement is assumed).
open FMFILE, "<$fm_file"
or die "Cannot read $fm_file!";
mkdir $tempdir unless -d $tempdir;
open TEXFILE, ">$tempdir/fm.tex"
or die "Cannot write LaTeX file!";

$fmopt = "";
@fmfile = ();
@fmhead = ();
$fmclass = "article";
$fmclassopt = "12pt";
while (<FMFILE>) {
chomp;
$fmopt = $1 if /fmopt:(.*)/;
$fmclass = $1 if /fmclass:(.*)/;
$fmclassopt = $1 if /fmclassopt:(.*)/;
if (/head:/) {
push @fmfile, " $_%\n";
while (<FMFILE>) {
chomp;
last if /end head/;
push @fmfile, " $_%\n";
# Remove comment prefix
s/^[\s%]*//;
push @fmhead, "$_%\n";
}
}

push @fmfile, " $_%\n";
}

print TEXFILE <<"EOF";
\\documentclass[$fmclassopt]{$fmclass}
\\usepackage{graphicx,psfrag,color}
\\usepackage{german}
EOF

print TEXFILE foreach(@fmhead);

print TEXFILE <<'EOF';
\setlength{\topmargin}{-1in}
\setlength{\headheight}{0pt}
\setlength{\headsep}{0pt}
\setlength{\topskip}{0pt}
\setlength{\textheight}{\paperheight}
\setlength{\oddsidemargin}{-1in}
\setlength{\evensidemargin}{-1in}
\setlength{\textwidth}{\paperwidth}
\setlength{\parindent}{0pt}
\special{! TeXDict begin /landplus90{true}store end }
%\special{! statusdict /setpage undef }
%\special{! statusdict /setpageparams undef }
\pagestyle{empty}
\newsavebox{\pict}
EOF

print TEXFILE "\\graphicspath{{../}}\n";

print TEXFILE <<'EOF';
\begin{document}
\begin{lrbox}{\pict}%
EOF

print TEXFILE foreach (@fmfile);
print TEXFILE " \\includegraphics[$fmopt]{$source}%\n";

print TEXFILE <<'EOF';
\end{lrbox}
\special{papersize=\the\wd\pict,\the\ht\pict}
\usebox{\pict}
\end{document}
EOF

close TEXFILE;

chdir($tempdir) or die "Cannot chdir to $tempdir!";

system("latex fm.tex") / 256 == 0
or die "Cannot latex fm.tex!";

# Using -E here, causes dvips to detect
# the psfrag phantom stuff and to set the BoundingBox wrong
system("dvips -E -P pdf fm.dvi -o fm.ps") / 256 == 0
or die "Cannot dvips!";

chdir("..") or die "Cannot chdir back up!";

# Restored: the two open() targets were eaten as an HTML tag in the
# copied listing; read the dvips output, write the destination EPS.
open PS, "<$tempdir/fm.ps"
or die "Cannot read $tempdir/fm.ps!";
open EPS, ">$dest_eps"
or die "Cannot write $dest_eps!";

# Correct the bounding box by setting the left margin to 0
# top margin to top of letterpaper!
# (I hope that is general enough…)
$saw_bounding_box = 0;
while(<PS>) {
if(! $saw_bounding_box) {
# if(s/^\%\%BoundingBox:\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)/\%\%BoundingBox: 0 $2 $3 $4/) {
if(s/^\%\%BoundingBox:\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)/\%\%BoundingBox: 0 $2 $3 792/) {
$saw_bounding_box = 1;
}
}
print EPS;
}

# Not using -E above causes
# papersizes to be included into the PS
# Strip off the specifications.
# Otherwise gv doesn’t show the BBox
# and epstopdf won’t detect the correct
# PDF media size!

# while(<PS>) {
# s/^%!PS-Adobe.*/%!PS-Adobe-3.0 EPSF-3.0/;

# next if /^\%\%DocumentPaperSizes:/;
# if(/^\%\%BeginPaperSize:/) {
# while(<PS>) {
# last if /^\%\%EndPaperSize/;
# }
# next;
# }
# s/statusdict \/setpage known/false/;
# s/statusdict \/setpageparams known/false/;
# print EPS;
# }

close EPS;
close PS;

system("epstopdf $dest_eps --outfile=$dest_pdf") / 256 == 0
or die "Cannot epstopdf!";

system("rm -rf $tempdir") / 256 == 0
or die "Cannot remove $tempdir!";

close FMFILE;

}
}
