ReadImage of PDF without "density" specified

Questions and postings pertaining to the usage of ImageMagick regardless of the interface.
Usage questions are like "How do I use ImageMagick to create drop shadows?"

If I read a PDF containing images of higher resolution, the images are rendered at the default low density and detail is lost. This has been observed before by many others, and was recently posted in a discussion (viewtopic). One solution is to specify the density before reading the file. However, I know it's possible to compute the resolution of an image from the information contained just in the PDF. This is because Adobe Acrobat is able to read a PDF file, know the size of the page and of the images contained in it, and display the images at full resolution.
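As a side note on the arithmetic: PDF user space is measured in points (72 per inch), so the effective density of a placed image is just its pixel count divided by the inches it covers on the page. A minimal sketch in Python; the numbers are made up for illustration, not taken from the poster's file:

```python
# Effective density of an image placed on a PDF page.
# PDF user space is measured in points: 72 points = 1 inch.

def effective_dpi(pixel_size: int, placed_size_pts: float) -> float:
    """Pixels of the embedded image divided by the inches it covers on the page."""
    inches = placed_size_pts / 72.0
    return pixel_size / inches

# Hypothetical example: a 1700-pixel-wide scan placed across a full
# 8.5-inch (612 pt) page width is effectively a 200 dpi image.
print(effective_dpi(1700, 612))  # -> 200.0
```

This is the value one would want to pass as -density so that the embedded image is rendered at its native resolution.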
Further, I do not want to ask users of my program to specify the "density". They won't know what that means, they don't care to know, and will probably get it wrong every time. They want images to be full resolution.
Does anyone know how to do this using the MagickCore API? If no one has a solution, I do see one extremely yucky alternative: parse the PDF file and compute it myself. It should be possible using the following steps: 1) read the PDF file as text; 2) get the scaling of the content image object and compute the density from it.

If so, your suggestion might be more usefully addressed to the Ghostscript people. While ImageMagick does make a call to Ghostscript, ImageMagick specifies a resolution in that call. The call essentially converts the PDF file to PNM ("portable anymap format"), which is then read by ImageMagick.
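The parse-it-yourself steps sketched above can be illustrated on a toy, uncompressed content stream; real PDFs are usually compressed and far messier, as the replies point out. The stream, the dictionary, and all the numbers here are invented for illustration:

```python
import re

# Toy, uncompressed fragments of a single-image PDF (all values invented).
content = "q 612 0 0 792 0 0 cm /Im0 Do Q"                     # page content stream
image_dict = "<< /Subtype /Image /Width 1700 /Height 2200 >>"  # image XObject dict

# Step 1: "read the PDF as text" and pull the cm matrix that places the image.
m = re.search(r"([\d.]+) ([\d.]+) ([\d.]+) ([\d.]+) ([\d.]+) ([\d.]+) cm", content)
placed_width_pts = float(m.group(1))   # horizontal scale = placed width in points

# Step 2: pull the image's pixel width from its dictionary.
width_px = int(re.search(r"/Width (\d+)", image_dict).group(1))

# Step 3: density = pixels / (placed points / 72).
dpi = width_px / (placed_width_pts / 72.0)
print(round(dpi))  # -> 200
```

Even this toy breaks as soon as the content stream is Flate-compressed, the image is cropped or rotated, or the page holds more than one image, which is exactly the objection raised below.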
It should be the native resolution of the embedded image, at least for my input file. Unfortunately, Ghostscript also seems to have the same problem, because if I call gswin32c directly the images are still downsampled. It seems that the -r option has to be specified. Since I can't change the ImageMagick code, nor the Ghostscript code, I see no alternative but to parse the PDF file myself to determine the correct resolution.
You can make a change on your own copy, then suggest that the patch be incorporated into the product.

A tool that parses the PDF could, at most, give hints about a good density value. But in many cases, an automatic approach to obtaining a value will reach a dead end. I myself sometimes obtain a 'good' value by looking at the properties of the background image, for example.
And in some cases I'm also using a bunch of tools - all of which could be made easier if there were a tool that integrated the available techniques. There are some useful plug-ins in Adobe Acrobat, I think.
But probably even Adobe can't tell a "correct resolution" for all PDFs. Anyway, you would basically have to write a full-fledged PDF viewer in order to compute your values. Regarding your steps: even for PDFs that contain only one image, it's usually not that easy. Some images are cropped. Some are centered within the page, with an additional bleed area. Things like that. And for other PDFs, you will run into bigger problems.
I'm not an expert in the PDF format, but I can say that simply looking at the content object will sometimes lead nowhere. First, the content object can be compressed, and then it isn't parsable as text; you would have to decompress it first. Then: what if it contains fonts and vector graphics?
What if it contains hundreds of objects? That's not an exaggeration - that's what the PDF format was designed for, not just holding a single rasterized image.
And, yes, Ghostscript would be the better place to integrate such jobs (it already has to parse the whole PDF). But, as I said, for a large share of PDFs it's not easily possible, or not possible at all, to obtain 'the one' density value. And I also have to say that there are other, more important issues in Ghostscript, like full color management or correct antialiasing.
Issues that can be solved, with a lot of work of course - contrary to the density value thing, which parsing alone can't settle.

Adobe Acrobat displays the images at full resolution; to me, that's pretty strong evidence that there is enough information in the PDF to scale an image with full resolution to a given size.
And, given my cursory reading of the PDF spec, this seems to be true. But there really is no one fixed density for all images in a PDF. Adobe Acrobat has a "pre-flight" tool that downsamples images if they go over a limit, which seems to me pretty much equivalent to the -r option of gswin32c. I agree that it isn't good to write a hack to parse a PDF to get a resolution, not only because images can have different densities, but because the expertise for parsing PDFs is in Ghostscript.
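The pre-flight behavior described above amounts to capping each image's effective density at a limit while leaving lower-density images untouched. A trivial sketch; the 300 dpi limit is an arbitrary example value, not Acrobat's actual default:

```python
def preflight_cap(effective_dpi: float, limit: float = 300.0) -> float:
    """Downsample only images whose effective dpi exceeds the limit,
    in the spirit of Acrobat's pre-flight tool (limit value is made up)."""
    return min(effective_dpi, limit)

print(preflight_cap(600.0))  # -> 300.0  (downsampled to the limit)
print(preflight_cap(150.0))  # -> 150.0  (left alone)
```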
I will pursue this further in the Ghostscript forums to understand why gswin32c downsamples images even when -r is not specified.

Rendering a PDF to a raster image might involve up- or down-sampling. There is no escaping this, and any software has to do it. If the PDF contains multiple raster images at different resolutions, as well as vector data at "no resolution", some may be up-sized and others down-sized.
As a rule of thumb, the ImageMagick default of 72 dpi gives poor quality, so a user might supersample: choose a higher density, then resize down. I assume Adobe Acrobat does something similar without being told.

With Adobe, that rather 'expensive' method shouldn't be necessary, as it does 'regular' antialiasing for all vector elements.
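The supersampling rule of thumb maps to simple arithmetic: render at N times the target density, then resize down to 100/N percent. A sketch, where the factor of 4 is an arbitrary choice:

```python
def supersample_plan(target_dpi: float, factor: int = 4):
    """Render at factor * target_dpi, then resize down by the same factor.
    The factor of 4 is a common but arbitrary choice."""
    render_dpi = target_dpi * factor
    resize_percent = 100.0 / factor
    return render_dpi, resize_percent

print(supersample_plan(72))  # -> (288, 25.0)
```

With ImageMagick this corresponds to a command along the lines of -density 288 in.pdf -resize 25% out.png.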
Well, it's a bit off-topic. Ghostscript is used as the PDF engine by many other programs, for example xpdf. So, to perform the rendering, it has to choose some density.
The fact that PDFs can also contain bitmaps raises the question: what density should they be rendered at? Ghostscript doesn't seem to offer a way to determine this automatically, so ImageMagick is stuck.
For me at least, an "optimal" rendering would use a density at least as great as the resolution of the image with the greatest resolution, so that no detail is lost in the Ghostscript rendering. The density could of course be greater than that, but it would produce larger bitmaps. I can live with the fact that some images will be expanded. Supersampling antialiasing of bitmap objects is not necessary for me (I'm not sure what it would mean to antialias a bitmap itself), so Ghostscript rendering is fine as long as I know the "optimal" density a priori.
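The "optimal density" just described can be computed directly once each image's pixel width and placed width in points are known: take the maximum of the effective densities. All numbers here are hypothetical:

```python
def optimal_density(images):
    """images: (pixel_width, placed_width_pts) pairs. Pick the highest
    effective dpi so no embedded image is rendered below native resolution."""
    return max(px / (pts / 72.0) for px, pts in images)

# Hypothetical page: a 200 dpi full-width scan and a small 96 dpi logo.
print(optimal_density([(1700, 612.0), (96, 72.0)]))  # -> 200.0
```

Rendering the whole page at this density preserves the sharpest image and merely expands the others, which is the trade-off the poster says they can live with.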
I could solve my problem if I had a tool that gave me the densities of all the bitmap images in the PDF. Poking around a bit, there are some PDF parsers out there, but nothing jumps out. Anyone know of a general-purpose PDF parser that I can hack?

Otherwise, there's no "correct" way to compute what resolution to export at. What if a page has two images at very different ppi?
Do you upscale the lower-resolution image at a yucky ratio? What if a page has an image stretched in one dimension? What if a page has an image rotated 30 degrees? What if a page has only vector graphics? Those are mostly arbitrary decisions you'll have to make, I think.