PEP Proposal wrote:
> Gabriel Genellina wrote:
>> On Thu, 25 Sep 2008 16:24:58 -0300, wrote:
>>> sorry, I have had these ideas for more than 10 years; please have a look
>>> at them and comment. Thx.
>>> This is another proposal for introducing types into Python.
>>> This is another proposal for introducing types into Python.
>> You got the terminology wrong. Python had "types" from the very start.
>> You're talking about some kind of generic functions, or an alternative
>> dispatch method.
> Typed parameters.

are unpythonic.

> Method-Declaration-filtered-typed parameters.

Phillip J. Eby's RuleDispatch package goes way further, already exists, and
doesn't require any new syntax.
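RuleDispatch itself is long unmaintained; as a rough illustration of what type-based dispatch without new syntax looks like, the standard library's functools.singledispatch (added in Python 3.4, well after this thread) covers the single-argument case:

```python
from functools import singledispatch

@singledispatch
def describe(obj):
    # Fallback for any type with no registered implementation.
    return "object"

@describe.register(int)
def _(obj):
    return "int"

@describe.register(str)
def _(obj):
    return "str"
```

Calling `describe(3)` selects the int implementation, `describe("x")` the str one, and anything else falls back to the generic version, with no changes to function-definition syntax.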

Posted On: Wednesday 7th of November 2012 12:23:24 PM Total Views:  529

Related Messages:

PEP proposal optparse   (161 Views)
I would like to know your thoughts on a proposed change to optparse that I have planned. It is possible to add default values to multiple options using set_defaults(). However, when adding descriptions to options, the developer has to specify the help text in each add_option() call. This results in unreadable code such as:

parser.add_option('-q', '--quiet', action="store_false", dest='verbose',
                  help='Output less information')
parser.add_option('-o', '--output', type='string', dest='castordir', metavar='',
                  help='specify the wanted CASTOR directory where to store the results tarball')
parser.add_option('-r', '--prevrel', type='string', dest='previousrel', metavar='',
                  help='Top level dir of previous release for regression analysis')

The same code could become much more readable if there were an equivalent of set_defaults for the description/help of the options. The same code could then become:

parser.set_description(
    verbose='Output less information',
    castordir='specify the wanted CASTOR directory where to store the results tarball',
    previousrel='Top level dir of previous release for regression analysis')
parser.add_option('-q', '--quiet', action="store_false", dest='verbose')
parser.add_option('-o', '--output', type='string', dest='castordir', metavar='')
parser.add_option('-r', '--prevrel', type='string', dest='previousrel', metavar='')

Help descriptions can often be quite long, and separating them in this fashion would, IMHO, be desirable. Kind regards
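The proposed behaviour could be prototyped today without changing optparse itself. This is only a sketch; the class name DescribingParser and the method name set_descriptions are made up for illustration:

```python
from optparse import OptionParser

class DescribingParser(OptionParser):
    """Hypothetical subclass sketching the proposed set_descriptions()."""

    def set_descriptions(self, **help_by_dest):
        # Map option dest names to their help text, analogous to set_defaults().
        self._help_by_dest = help_by_dest

    def add_option(self, *args, **kwargs):
        opt = OptionParser.add_option(self, *args, **kwargs)
        # If no help was given inline, look it up by the option's dest.
        help_map = getattr(self, '_help_by_dest', {})
        if opt.dest in help_map and not opt.help:
            opt.help = help_map[opt.dest]
        return opt
```

Usage would mirror the proposal: call set_descriptions() once, then add options without repeating help text in every add_option() call.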
proposal: give delattr ability to ignore missing attribute   (468 Views)
I would like to propose that functionality be added to delattr to handle the case when the attribute does not exist. First off, getattr handles this nicely with the default parameter:

value = getattr(obj, 'foo', False)

instead of:

try:
    value = getattr(obj, 'foo')
except AttributeError:
    value = False

or:

if hasattr(obj, 'foo'):
    value = getattr(obj, 'foo')
else:
    value = False

And I think it makes sense to have something similar for delattr (name the argument as you wish):

delattr(obj, 'foo', allow_missing=True)

instead of:

try:
    delattr(obj, 'foo')
except AttributeError:
    pass

or:

try:
    del obj.foo
except AttributeError:
    pass

or:

if hasattr(obj, 'foo'):
    delattr(obj, 'foo')

For backwards compatibility, allow_missing would default to False. Gary
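In the meantime, the proposed behaviour can be approximated with a small helper (the name delattr_safe is hypothetical):

```python
def delattr_safe(obj, name, allow_missing=False):
    """Delete an attribute; optionally ignore it being absent,
    mirroring the allow_missing flag from the proposal."""
    try:
        delattr(obj, name)
    except AttributeError:
        if not allow_missing:
            raise
```

With allow_missing=True the call is a no-op for a missing attribute; with the default False it behaves exactly like plain delattr.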
del and sets proposal   (169 Views)
You can do the following:

a = [1, 2, 3, 4, 5]
del a[0]

and:

a = {1: '1', 2: '2', 3: '3', 4: '4', 5: '5'}
del a[1]

So why doesn't it work the same for sets (particularly since sets are based on a dictionary)?

a = set([1, 2, 3, 4, 5])
del a[1]

Yes, I know that sets have a remove method (like lists), but since dictionaries don't have a remove method, shouldn't sets behave more like dictionaries and less like lists? IMHO del for sets is quite intuitive. I guess it is too late to change now. -Larry
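For reference, the existing set API already offers both removal behaviours, differing only in how a missing element is handled:

```python
s = {1, 2, 3, 4, 5}
s.remove(1)    # like del d[k]: raises KeyError if the element is absent
s.discard(99)  # silently does nothing if the element is absent
```

So the functionality of del on a dict key exists for sets under a different spelling; the debate above is purely about syntax.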
proposal, change self. to .   (177 Views)
On 03.08.2008, 12:51, Equand wrote:
> how about changing the precious self. to .
> imagine
>
> self.update()
>
> .update()
>
> simple right

What about:

class x:
    def x(self, ob):
        ob.doSomethingWith(self)

Not so simple anymore, is it? If you're not trolling, there are hundreds of reasons why the explicit self is as it is, and it's not going to go away, just as a thread that produced immense amounts of responses demonstrated around a week ago. Read that, and rethink. --- Heiko.
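A minimal sketch of the point Heiko is making: a method often has to distinguish its own attributes from another object's, which a bare leading dot could not express:

```python
class Node:
    def __init__(self, value):
        self.value = value

    def swap(self, other):
        # With implicit attribute access, a bare "value" could not
        # distinguish self.value from other.value.
        self.value, other.value = other.value, self.value
```

Here both objects are of the same class, so only the explicit receiver makes clear whose attribute each expression refers to.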
feature proposal, debug on exception   (204 Views)
There's an occasional question here about how to get Python to launch pdb on encountering an uncaught exception. The answer is to look in some ASPN recipe and do some weird magic. I guess that works, but it's another thing to remember or keep looking up when the occasion arises (some program crashes unexpectedly). I find myself manually adding tracing instead, finding out that I did it wrong and having to re-launch a long-running program, etc. I'd like to propose that debug-on-exception be made into a standard feature that is easy to enable, e.g. with a command line option or with a simple pdb call immediately after the import:

import pdb
pdb.debug_on_exception(True)
...

Would there be big obstacles to this? It would have saved me considerable hassle on a number of occasions. I'm constantly processing large data sets that will munch along happily for hours and hours before hitting some unanticipated condition in the data, and it would be great to trap immediately rather than have to analyze the resulting stack dump and restart.

On May 21, 10:59 am, Paul Rubin wrote:
> I'd like to propose that debug-on-exception be made into a standard
> feature that is easy to enable, e.g. with a command line option
> or with a simple pdb call immediately after the import:

Forgive me if I've missed your point, but it seems you can already do this: pdb can also be invoked as a script to debug other scripts. For example:

python -m pdb myscript.py

When invoked as a script, pdb will automatically enter post-mortem debugging if the program being debugged exits abnormally. After post-mortem debugging (or after normal exit of the program), pdb will restart the program. Automatic restarting preserves pdb's state (such as breakpoints) and in most cases is more useful than quitting the debugger upon the program's exit. New in version 2.4: Restarting post-mortem behavior added.
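There is no pdb.debug_on_exception in the standard library; a rough sketch of the proposed helper using sys.excepthook and the real pdb.post_mortem (the helper name itself is hypothetical):

```python
import pdb
import sys
import traceback

def debug_on_exception(enable=True):
    """Hypothetical helper: drop into pdb whenever an exception
    propagates uncaught, as the proposal suggests."""
    if enable:
        def hook(exc_type, exc_value, tb):
            traceback.print_exception(exc_type, exc_value, tb)
            pdb.post_mortem(tb)  # interactive debugger at the crash site
        sys.excepthook = hook
    else:
        sys.excepthook = sys.__excepthook__  # restore default behaviour
```

Calling debug_on_exception(True) near the top of a long-running script gives exactly the trap-immediately behaviour asked for, without restarting under `python -m pdb`.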
Alternate indent proposal for python 3000   (237 Views)
I was considering putting together a proposal for an alternate block syntax for Python, and I figured I'd post it here and see what the general reactions are. I did some searching, and while I found a lot of tab vs. space debates, I didn't see anything like what I'm thinking of, so forgive me if this is a very dead horse. Generally speaking, I like the current block scheme just fine. I use Python on a daily basis for system administration and text parsing tasks, and it works great for me. From time to time, though, I find myself needing a language for server-side includes in web pages. Because of the need to indent (and terminate indents), Python seems an awkward choice for this, and it's easy for me to see why PHP and Perl are more popular choices for this kind of task. Perhaps this is just my perception, though. I feel that including some optional means to delimit blocks would be a big step in getting wider adoption of the language in web development and in general. I do understand, though, that the current strict indenting is part of the core of the language, so... thoughts?
AOP and pep 246   (147 Views)
I am interested in AOP in Python. From there one naturally (or google-ly) reaches PEAK. But PEAK seems to be discontinued, whereas PEP 246 on adapters seems to have been rejected in favor of something else. What? Can someone please throw some light on what the current state of the art is?
A proposal for attribute lookup failures   (205 Views)
Proposal: When an attribute lookup fails for an object, check the top-level (and local) scope for a corresponding function or attribute and apply it as the called attribute if found; drop through to the exception otherwise. This is just syntactic sugar. Example:

a = [1, 2, 3]
a.len()     # -> fails,
            # -> finds len() in the top-level symbol table,
            # -> applies len(a)
            # -> 3
a.foobar()  # -> fails,
            # -> no foobar() in scope,
            # -> raise NameError

Benefits:
- Uniform OO style. Top-level functions can be hidden as attributes of data. Most of the top-level functions / constructors can be considered attributes of the data; e.g., an int() representation of a string can be considered _part_ of the semantics of the string (i.e., one _meaning_ of the string is an int representation); but doing it this way saves storing the int (etc.) data as part of the actual object. The trade-off is speed for space.
- Ability to "add" attributes to built-in types (which is requested all the time!!) without having to sub-class a built-in type and initialize all instances as the sub-class. E.g., one can simply define flub() in the top-level (local) namespace, and then use "blah".flub() as if the built-in str class provided flub().
- Backwards compatible; one can use the top-level functions when desired. No change to existing code required.
- Seemingly trivial to implement (though I don't know much C). On attribute lookup failure, simply iterate the symbol table looking for a match, otherwise raise the exception (like the current implementation).

Drawbacks:
- Could hide the fact that an extra O(n) lookup on the symbol table is necessary on attribute lookup failure. (Maybe there could be a switch/pragma to enable or disable the functionality.)
- As above, attribute lookup failure requires an extra lookup on the symbol table, when normally it would fall through directly to the exception.

Disclaimer: I realize that very often what seems good to me ends up being half-assed, backwards and generally bad.
So I'd appreciate input on this proposition. Don't take it that I think the idea is wonderful and am trying to push it. I am just throwing it out there to see what may become of it.
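A restricted version of the proposal can already be emulated per-class with __getattr__, which Python only calls after normal attribute lookup fails. This sketch consults an explicit whitelist of builtins rather than the whole symbol table (AutoAttr and _fallbacks are illustrative names):

```python
class AutoAttr:
    """Wrap a value so that selected top-level functions
    appear as no-argument methods, as in the proposal."""

    _fallbacks = {'len': len, 'sum': sum, 'sorted': sorted}

    def __init__(self, data):
        self._data = data

    def __getattr__(self, name):
        # Only reached when normal attribute lookup has failed.
        func = self._fallbacks.get(name)
        if func is None:
            raise AttributeError(name)
        return lambda *args, **kw: func(self._data, *args, **kw)
```

With this, AutoAttr([1, 2, 3]).len() returns 3, while an unknown name like .foobar() still raises AttributeError, matching the fall-through behaviour described above.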
RE: sorteddict PEP proposal [started off as orderedict]   (216 Views)
> From: Paul Hankin > > > Here's a first go. Sorting occurs when the keys are iterated over, > making it fast (almost as a dict) for construction, insertion, and > deletion, but slow if you're iterating a lot. You should look at some > use cases to decide if this approach is best, or if a sorted > datastructure should be used instead, but my instinct is that this is > a decent approach. Certainly, you're unlikely to get a simpler > implementation > > class sorteddict(dict): > "A sorted dictionary" > def __init__(self, arg=None, cmp=None, key=None, reverse=False): > if arg: > super(sorteddict, self).__init__(arg) > else: > super(sorteddict, self).__init__() > self._cmp = cmp > self._key = key > self._reverse = reverse > def keys(self): > return sorted(super(sorteddict, self).keys(), cmp=self._cmp, > key=self._key, reverse=self._reverse) > def iter_keys(self): > return (s for s in self.keys()) > def items(self): > return [(key, self[key]) for key in self.keys()] > def iter_items(self): > return ((key, self[key]) for key in self.keys()) > def values(self): > return [self[key] for key in self.keys()] > def iter_values(self): > return (self[key] for key in self.keys()) > def __str__(self): > return '{' + ', '.join('%s: %s' % (repr(k), repr(v)) > for k, v in self.iter_items()) + '}' > def __repr__(self): > return str(self) > def __iter__(self): > return self.iter_keys() You could speed up keys() at the cost of memory if you maintained a list of keys in the instance. Doing so would let you use an "unsorted" flag that gets set when a new key is added and checked when keys() is called. If the flag is unset, just return a copy of the list. Otherwise, sort the list in place, return a copy, and unset the flag. (Copies because you don't want the master key list to be modified by code using the class.) The use case for this seems to be when you have a dictionary that you need to often work through in sorted order. 
Sorting the keys every time keys() is called isn't an improvement over using a regular dict and sorting the keys normally. So the extra memory cost of maintaining an internal keys list looks reasonable to me. -- -Bill Hamilton
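Bill's cached-keys suggestion might look roughly like this in Python 3 (the cmp argument is dropped, since Python 3 sorting only takes key; update() and other bulk mutators are deliberately left uncovered in this sketch):

```python
class sorteddict(dict):
    """Sorted dict that caches its key list and re-sorts lazily."""

    def __init__(self, arg=None, key=None, reverse=False):
        super().__init__(arg or {})
        self._key, self._reverse = key, reverse
        self._cache = list(super().keys())
        self._dirty = True  # cache needs sorting before first use

    def __setitem__(self, k, v):
        if k not in self:
            self._cache.append(k)
            self._dirty = True  # new key invalidates the sorted order
        super().__setitem__(k, v)

    def __delitem__(self, k):
        super().__delitem__(k)
        self._cache.remove(k)  # removal preserves the existing order

    def keys(self):
        if self._dirty:
            self._cache.sort(key=self._key, reverse=self._reverse)
            self._dirty = False
        # Return a copy so callers can't mutate the master list.
        return list(self._cache)
```

As suggested, repeated keys() calls between insertions cost only a list copy, while each insertion just appends and sets the dirty flag.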
Only one week left for PyCon proposals!   (178 Views)
There is only one week left for PyCon tutorial & scheduled talk proposals. If you've been thinking about making a proposal, now's the time! Tutorial details and instructions here: Scheduled talk details and instructions here: The deadline is Friday, November 16. Don't put it off any longer! PyCon 2008: -- David Goodger PyCon 2008 Chair
pep 3116 behaviour on non-blocking reads   (223 Views)
In the RawIOBase class I read the following:

.read(n: int) -> bytes
    Read up to n bytes from the object and return them. Fewer than n bytes may be returned if the operating system call returns fewer than n bytes. If 0 bytes are returned, this indicates end of file. If the object is in non-blocking mode and no bytes are available, the call returns None.

I would like the developers to reconsider: return 0 bytes when no bytes are available, and let None indicate end of file. The reason is that this can lead to clearer code that will work independent of the blocking mode of the stream. Consider a consumer that just has to treat each byte. With the current choice the code will look something like the following (assuming PEP 315 is implemented):

| do:
|     buf = stream.read()
| while buf != "":
|     if buf is not None:
|         for b in buf:
|             treat(b)

If what the method returns follows my proposal, the code to do the same would look something like the following:

| do:
|     buf = stream.read()
| while buf is not None:
|     for b in buf:
|         treat(b)

The advantage of my proposal is that in a lot of cases an empty buffer can be treated just the same as a non-empty one, and that is reflected in the return values of my proposal. -- Antoon Pardon
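The current PEP 3116 convention can be exercised with a fake non-blocking stream (FakeStream and consume are illustrative names; the loop uses a plain while, since PEP 315's do/while was never accepted):

```python
class FakeStream:
    """Simulates a non-blocking read under the PEP 3116 convention:
    None means "no data available yet", b'' means end of file."""

    def __init__(self, chunks):
        self._chunks = list(chunks)

    def read(self):
        return self._chunks.pop(0) if self._chunks else b''

def consume(stream):
    out = []
    while True:
        buf = stream.read()
        if buf == b'':       # EOF under the current convention
            break
        if buf is not None:  # None just means "try again later"
            out.extend(buf)  # iterating bytes yields ints
    return out
```

Note the extra None check the consumer is forced to carry; under Antoon's inverted convention the inner test would disappear, which is exactly his point.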
Reminder: call for proposals "Python Language and Libraries Track"for Europython 2006   (216 Views)
Registration for Europython (3-5 July) at CERN in Geneva is now open. If you feel like submitting a talk proposal, there's still time until the 31st of May. If you want to talk about a library you developed, or one you know well and want to share your knowledge of, or about how you are making the best out of Python through inventive/elegant idioms and patterns (or if you are a language guru willing to disseminate your wisdom), you can submit a proposal for the Python Language and Libraries track: """ A track about Python the Language, all batteries included. Talks about the language, language evolution, patterns and idioms, implementations (CPython, IronPython, Jython, PyPy ...) and implementation issues belong to the track. So do talks about the standard library or interesting 3rd-party libraries (and frameworks), unless the gravitational pull of other tracks is stronger. """ The full call and submission links are at: Samuele Pedroni, Python Language and Libraries Track Chair
Reminder: PyCon proposals due in a week   (214 Views)
The deadline for PyCon 2006 submissions is now only a week away. If you've been procrastinating about putting your outline together, now's the time to get going... Call for Proposals: Proposal submission site: --amk
Reminder: PyCon proposal deadline now two weeks away   (239 Views)
Remember to send in your proposals for PyCon 2005; the deadline for submissions is December 31st, only two weeks away. Read the call for proposals for more details: Proposal submission site: PyCon will also feature BoF sessions, sprints, lightning talks, and open space for discussions. Please see the PyCon wiki at for more information, and to record your ideas and plan your events. --amk
site-packages, unzipepd there but import fails   (220 Views)
I unzipped the archive and put the folder in site-packages. When I run the install, nothing happens. When I do "import pp" from the shell, it complains the module doesn't exist. Isn't placing the folder in site-packages enough? These often don't work for me, but normally this still works.
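A quick way to check whether Python can actually see a package dropped into site-packages is importlib.util.find_spec (available in Python 3.4+; the helper name diagnose is made up for illustration):

```python
import importlib.util
import sys

def diagnose(package_name):
    """Report whether a package is importable and, if so, from where."""
    spec = importlib.util.find_spec(package_name)
    if spec is None:
        return "not found on sys.path: %r" % sys.path
    return "found at %s" % spec.origin
```

If the package is reported as not found, the usual causes are that the unzipped folder lacks an importable package directory at its top level, or that it went into the site-packages of a different Python installation than the shell is running.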
Call for proposals -- PyCon 2009   (516 Views)
Call for proposals -- PyCon 2009 -- =============================================================== Want to share your experience and expertise? PyCon 2009 is looking for proposals to fill the formal presentation tracks. The PyCon conference days will be March 27-29, 2009 in Chicago, Illinois, preceded by the tutorial days (March 25-26), and followed by four days of development sprints (March 30-April 2). Previous PyCon conferences have had a broad range of presentations, from reports on academic and commercial projects to tutorials and case studies. We hope to continue that tradition this year. Online proposal submission will open on September 29, 2008. Proposals will be accepted through November 03, with acceptance notifications coming out on December 15. For the detailed call for proposals, please see: We look forward to seeing you in Chicago!